Test Report: KVM_Linux_crio 19501

483e94d4f5cf3f9f4d946099f728195390e8d80c:2024-08-26:35948

Test failures (30/312)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 153.97
36 TestAddons/parallel/MetricsServer 358.23
45 TestAddons/StoppedEnableDisable 154.43
164 TestMultiControlPlane/serial/StopSecondaryNode 142.03
166 TestMultiControlPlane/serial/RestartSecondaryNode 53.19
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 395.14
171 TestMultiControlPlane/serial/StopCluster 141.87
231 TestMultiNode/serial/RestartKeepsNodes 328.57
233 TestMultiNode/serial/StopMultiNode 141.52
240 TestPreload 178.99
248 TestKubernetesUpgrade 387.76
284 TestStartStop/group/old-k8s-version/serial/FirstStart 292.14
285 TestPause/serial/SecondStartNoReconfiguration 57.32
293 TestStartStop/group/no-preload/serial/Stop 139.11
295 TestStartStop/group/embed-certs/serial/Stop 139.18
298 TestStartStop/group/old-k8s-version/serial/DeployApp 0.58
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.32
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.03
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
309 TestStartStop/group/old-k8s-version/serial/SecondStart 740.7
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.44
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.43
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.46
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.61
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 408.4
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 532.76
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 335.77
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 111.73
TestAddons/parallel/Ingress (153.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-530639 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-530639 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-530639 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004278972s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-530639 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.179530617s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
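curl uses exit code 28 for "operation timed out", so the status-28 failure propagated through ssh above most likely means nothing answered on 127.0.0.1:80 inside the VM during the roughly 2m11s the probe ran. A hand-run version of the same probe, reconstructed from the Run: line above (the -v and --max-time 30 flags are diagnostic additions, not part of the test):

    out/minikube-linux-amd64 -p addons-530639 ssh \
      "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"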
addons_test.go:288: (dbg) Run:  kubectl --context addons-530639 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.11
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable ingress --alsologtostderr -v=1: (7.744923403s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-530639 -n addons-530639
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 logs -n 25: (1.238698431s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-232599                                                                     | download-only-232599 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| delete  | -p download-only-210128                                                                     | download-only-210128 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-754943 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | binary-mirror-754943                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44369                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-754943                                                                     | binary-mirror-754943 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-530639 --wait=true                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:49 UTC | 26 Aug 24 10:50 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-530639 ssh cat                                                                       | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | /opt/local-path-provisioner/pvc-d9488103-fa6b-4b30-86cd-3775be1f0d86_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-530639 ip                                                                            | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | -p addons-530639                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | -p addons-530639                                                                            |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | addons-530639 addons                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-530639 addons                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:51 UTC | 26 Aug 24 10:51 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-530639 ssh curl -s                                                                   | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-530639 ip                                                                            | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
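	For readability, the wrapped start entry in the Audit table above reassembles to the single invocation below; the binary path is the one used by the Run: lines earlier in this report (the table records only the subcommand and args), and the line breaks here are only for readability:

	    out/minikube-linux-amd64 start -p addons-530639 --wait=true --memory=4000 --alsologtostderr \
	      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	      --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2 --container-runtime=crio \
	      --addons=ingress --addons=ingress-dns --addons=helm-tiller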
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 10:47:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 10:47:13.891081  107298 out.go:345] Setting OutFile to fd 1 ...
	I0826 10:47:13.891202  107298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:13.891211  107298 out.go:358] Setting ErrFile to fd 2...
	I0826 10:47:13.891216  107298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:13.891445  107298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 10:47:13.892104  107298 out.go:352] Setting JSON to false
	I0826 10:47:13.893230  107298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1775,"bootTime":1724667459,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 10:47:13.893295  107298 start.go:139] virtualization: kvm guest
	I0826 10:47:13.895574  107298 out.go:177] * [addons-530639] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 10:47:13.896870  107298 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 10:47:13.896896  107298 notify.go:220] Checking for updates...
	I0826 10:47:13.899513  107298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 10:47:13.900862  107298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:47:13.902276  107298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:13.903614  107298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 10:47:13.904817  107298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 10:47:13.906381  107298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 10:47:13.940293  107298 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 10:47:13.941848  107298 start.go:297] selected driver: kvm2
	I0826 10:47:13.941879  107298 start.go:901] validating driver "kvm2" against <nil>
	I0826 10:47:13.941894  107298 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 10:47:13.942638  107298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:13.942727  107298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 10:47:13.958770  107298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 10:47:13.958864  107298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 10:47:13.959094  107298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 10:47:13.959168  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:13.959181  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:13.959188  107298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 10:47:13.959247  107298 start.go:340] cluster config:
	{Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:47:13.959744  107298 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:13.961916  107298 out.go:177] * Starting "addons-530639" primary control-plane node in "addons-530639" cluster
	I0826 10:47:13.963150  107298 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:13.963209  107298 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 10:47:13.963223  107298 cache.go:56] Caching tarball of preloaded images
	I0826 10:47:13.963330  107298 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 10:47:13.963345  107298 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 10:47:13.963664  107298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json ...
	I0826 10:47:13.963692  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json: {Name:mkafa60e91b41cce64f8251eb832bc8cf14e0b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:13.963899  107298 start.go:360] acquireMachinesLock for addons-530639: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 10:47:13.963968  107298 start.go:364] duration metric: took 48.469µs to acquireMachinesLock for "addons-530639"
	I0826 10:47:13.963997  107298 start.go:93] Provisioning new machine with config: &{Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 10:47:13.964065  107298 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 10:47:13.965962  107298 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0826 10:47:13.966101  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:13.966131  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:13.981368  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0826 10:47:13.981911  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:13.982506  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:13.982549  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:13.982896  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:13.983045  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:13.983204  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:13.983324  107298 start.go:159] libmachine.API.Create for "addons-530639" (driver="kvm2")
	I0826 10:47:13.983353  107298 client.go:168] LocalClient.Create starting
	I0826 10:47:13.983396  107298 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 10:47:14.061324  107298 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 10:47:14.343346  107298 main.go:141] libmachine: Running pre-create checks...
	I0826 10:47:14.343378  107298 main.go:141] libmachine: (addons-530639) Calling .PreCreateCheck
	I0826 10:47:14.343909  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:14.344344  107298 main.go:141] libmachine: Creating machine...
	I0826 10:47:14.344363  107298 main.go:141] libmachine: (addons-530639) Calling .Create
	I0826 10:47:14.344496  107298 main.go:141] libmachine: (addons-530639) Creating KVM machine...
	I0826 10:47:14.345734  107298 main.go:141] libmachine: (addons-530639) DBG | found existing default KVM network
	I0826 10:47:14.346582  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.346412  107321 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f330}
	I0826 10:47:14.346611  107298 main.go:141] libmachine: (addons-530639) DBG | created network xml: 
	I0826 10:47:14.346627  107298 main.go:141] libmachine: (addons-530639) DBG | <network>
	I0826 10:47:14.346640  107298 main.go:141] libmachine: (addons-530639) DBG |   <name>mk-addons-530639</name>
	I0826 10:47:14.346685  107298 main.go:141] libmachine: (addons-530639) DBG |   <dns enable='no'/>
	I0826 10:47:14.346712  107298 main.go:141] libmachine: (addons-530639) DBG |   
	I0826 10:47:14.346767  107298 main.go:141] libmachine: (addons-530639) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0826 10:47:14.346804  107298 main.go:141] libmachine: (addons-530639) DBG |     <dhcp>
	I0826 10:47:14.346846  107298 main.go:141] libmachine: (addons-530639) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0826 10:47:14.346877  107298 main.go:141] libmachine: (addons-530639) DBG |     </dhcp>
	I0826 10:47:14.346892  107298 main.go:141] libmachine: (addons-530639) DBG |   </ip>
	I0826 10:47:14.346904  107298 main.go:141] libmachine: (addons-530639) DBG |   
	I0826 10:47:14.346922  107298 main.go:141] libmachine: (addons-530639) DBG | </network>
	I0826 10:47:14.346939  107298 main.go:141] libmachine: (addons-530639) DBG | 
	I0826 10:47:14.352246  107298 main.go:141] libmachine: (addons-530639) DBG | trying to create private KVM network mk-addons-530639 192.168.39.0/24...
	I0826 10:47:14.421042  107298 main.go:141] libmachine: (addons-530639) DBG | private KVM network mk-addons-530639 192.168.39.0/24 created
	I0826 10:47:14.421083  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.421018  107321 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:14.421107  107298 main.go:141] libmachine: (addons-530639) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 ...
	I0826 10:47:14.421125  107298 main.go:141] libmachine: (addons-530639) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 10:47:14.421146  107298 main.go:141] libmachine: (addons-530639) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 10:47:14.686979  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.686792  107321 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa...
	I0826 10:47:14.850851  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.850658  107321 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/addons-530639.rawdisk...
	I0826 10:47:14.850909  107298 main.go:141] libmachine: (addons-530639) DBG | Writing magic tar header
	I0826 10:47:14.850949  107298 main.go:141] libmachine: (addons-530639) DBG | Writing SSH key tar header
	I0826 10:47:14.850985  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 (perms=drwx------)
	I0826 10:47:14.850999  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.850786  107321 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 ...
	I0826 10:47:14.851017  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639
	I0826 10:47:14.851031  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 10:47:14.851043  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 10:47:14.851054  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:14.851062  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 10:47:14.851071  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 10:47:14.851077  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 10:47:14.851087  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 10:47:14.851092  107298 main.go:141] libmachine: (addons-530639) Creating domain...
	I0826 10:47:14.851106  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 10:47:14.851121  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 10:47:14.851134  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins
	I0826 10:47:14.851147  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home
	I0826 10:47:14.851153  107298 main.go:141] libmachine: (addons-530639) DBG | Skipping /home - not owner
	I0826 10:47:14.852204  107298 main.go:141] libmachine: (addons-530639) define libvirt domain using xml: 
	I0826 10:47:14.852237  107298 main.go:141] libmachine: (addons-530639) <domain type='kvm'>
	I0826 10:47:14.852269  107298 main.go:141] libmachine: (addons-530639)   <name>addons-530639</name>
	I0826 10:47:14.852293  107298 main.go:141] libmachine: (addons-530639)   <memory unit='MiB'>4000</memory>
	I0826 10:47:14.852300  107298 main.go:141] libmachine: (addons-530639)   <vcpu>2</vcpu>
	I0826 10:47:14.852306  107298 main.go:141] libmachine: (addons-530639)   <features>
	I0826 10:47:14.852335  107298 main.go:141] libmachine: (addons-530639)     <acpi/>
	I0826 10:47:14.852357  107298 main.go:141] libmachine: (addons-530639)     <apic/>
	I0826 10:47:14.852367  107298 main.go:141] libmachine: (addons-530639)     <pae/>
	I0826 10:47:14.852383  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852395  107298 main.go:141] libmachine: (addons-530639)   </features>
	I0826 10:47:14.852407  107298 main.go:141] libmachine: (addons-530639)   <cpu mode='host-passthrough'>
	I0826 10:47:14.852419  107298 main.go:141] libmachine: (addons-530639)   
	I0826 10:47:14.852435  107298 main.go:141] libmachine: (addons-530639)   </cpu>
	I0826 10:47:14.852448  107298 main.go:141] libmachine: (addons-530639)   <os>
	I0826 10:47:14.852458  107298 main.go:141] libmachine: (addons-530639)     <type>hvm</type>
	I0826 10:47:14.852467  107298 main.go:141] libmachine: (addons-530639)     <boot dev='cdrom'/>
	I0826 10:47:14.852478  107298 main.go:141] libmachine: (addons-530639)     <boot dev='hd'/>
	I0826 10:47:14.852502  107298 main.go:141] libmachine: (addons-530639)     <bootmenu enable='no'/>
	I0826 10:47:14.852520  107298 main.go:141] libmachine: (addons-530639)   </os>
	I0826 10:47:14.852536  107298 main.go:141] libmachine: (addons-530639)   <devices>
	I0826 10:47:14.852553  107298 main.go:141] libmachine: (addons-530639)     <disk type='file' device='cdrom'>
	I0826 10:47:14.852568  107298 main.go:141] libmachine: (addons-530639)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/boot2docker.iso'/>
	I0826 10:47:14.852582  107298 main.go:141] libmachine: (addons-530639)       <target dev='hdc' bus='scsi'/>
	I0826 10:47:14.852591  107298 main.go:141] libmachine: (addons-530639)       <readonly/>
	I0826 10:47:14.852598  107298 main.go:141] libmachine: (addons-530639)     </disk>
	I0826 10:47:14.852612  107298 main.go:141] libmachine: (addons-530639)     <disk type='file' device='disk'>
	I0826 10:47:14.852624  107298 main.go:141] libmachine: (addons-530639)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 10:47:14.852640  107298 main.go:141] libmachine: (addons-530639)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/addons-530639.rawdisk'/>
	I0826 10:47:14.852647  107298 main.go:141] libmachine: (addons-530639)       <target dev='hda' bus='virtio'/>
	I0826 10:47:14.852658  107298 main.go:141] libmachine: (addons-530639)     </disk>
	I0826 10:47:14.852670  107298 main.go:141] libmachine: (addons-530639)     <interface type='network'>
	I0826 10:47:14.852689  107298 main.go:141] libmachine: (addons-530639)       <source network='mk-addons-530639'/>
	I0826 10:47:14.852706  107298 main.go:141] libmachine: (addons-530639)       <model type='virtio'/>
	I0826 10:47:14.852719  107298 main.go:141] libmachine: (addons-530639)     </interface>
	I0826 10:47:14.852729  107298 main.go:141] libmachine: (addons-530639)     <interface type='network'>
	I0826 10:47:14.852740  107298 main.go:141] libmachine: (addons-530639)       <source network='default'/>
	I0826 10:47:14.852751  107298 main.go:141] libmachine: (addons-530639)       <model type='virtio'/>
	I0826 10:47:14.852763  107298 main.go:141] libmachine: (addons-530639)     </interface>
	I0826 10:47:14.852776  107298 main.go:141] libmachine: (addons-530639)     <serial type='pty'>
	I0826 10:47:14.852790  107298 main.go:141] libmachine: (addons-530639)       <target port='0'/>
	I0826 10:47:14.852801  107298 main.go:141] libmachine: (addons-530639)     </serial>
	I0826 10:47:14.852812  107298 main.go:141] libmachine: (addons-530639)     <console type='pty'>
	I0826 10:47:14.852829  107298 main.go:141] libmachine: (addons-530639)       <target type='serial' port='0'/>
	I0826 10:47:14.852842  107298 main.go:141] libmachine: (addons-530639)     </console>
	I0826 10:47:14.852856  107298 main.go:141] libmachine: (addons-530639)     <rng model='virtio'>
	I0826 10:47:14.852872  107298 main.go:141] libmachine: (addons-530639)       <backend model='random'>/dev/random</backend>
	I0826 10:47:14.852892  107298 main.go:141] libmachine: (addons-530639)     </rng>
	I0826 10:47:14.852904  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852910  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852923  107298 main.go:141] libmachine: (addons-530639)   </devices>
	I0826 10:47:14.852930  107298 main.go:141] libmachine: (addons-530639) </domain>
	I0826 10:47:14.852940  107298 main.go:141] libmachine: (addons-530639) 
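	The block above is the complete libvirt domain XML minikube defined for this VM. Assuming the host and domain still exist, the same definition (and the private network created a few lines earlier) can be inspected directly with standard virsh commands rather than re-reading the log:

	    virsh dumpxml addons-530639          # domain definition logged above
	    virsh net-dumpxml mk-addons-530639   # private KVM network created at 10:47:14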
	I0826 10:47:14.859516  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:5c:32:ad in network default
	I0826 10:47:14.860100  107298 main.go:141] libmachine: (addons-530639) Ensuring networks are active...
	I0826 10:47:14.860127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:14.860765  107298 main.go:141] libmachine: (addons-530639) Ensuring network default is active
	I0826 10:47:14.861008  107298 main.go:141] libmachine: (addons-530639) Ensuring network mk-addons-530639 is active
	I0826 10:47:14.862197  107298 main.go:141] libmachine: (addons-530639) Getting domain xml...
	I0826 10:47:14.862863  107298 main.go:141] libmachine: (addons-530639) Creating domain...
	I0826 10:47:16.339355  107298 main.go:141] libmachine: (addons-530639) Waiting to get IP...
	I0826 10:47:16.340175  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.340590  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.340636  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.340576  107321 retry.go:31] will retry after 281.515746ms: waiting for machine to come up
	I0826 10:47:16.624344  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.624992  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.625025  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.624939  107321 retry.go:31] will retry after 243.037698ms: waiting for machine to come up
	I0826 10:47:16.869416  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.869844  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.869872  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.869804  107321 retry.go:31] will retry after 443.620624ms: waiting for machine to come up
	I0826 10:47:17.315571  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:17.316085  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:17.316114  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:17.316031  107321 retry.go:31] will retry after 426.309028ms: waiting for machine to come up
	I0826 10:47:17.743692  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:17.744176  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:17.744200  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:17.744127  107321 retry.go:31] will retry after 677.222999ms: waiting for machine to come up
	I0826 10:47:18.422949  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:18.423371  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:18.423395  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:18.423329  107321 retry.go:31] will retry after 656.330104ms: waiting for machine to come up
	I0826 10:47:19.081181  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:19.081613  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:19.081645  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:19.081567  107321 retry.go:31] will retry after 945.440779ms: waiting for machine to come up
	I0826 10:47:20.028865  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:20.029347  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:20.029372  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:20.029312  107321 retry.go:31] will retry after 1.142316945s: waiting for machine to come up
	I0826 10:47:21.173621  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:21.174133  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:21.174160  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:21.174063  107321 retry.go:31] will retry after 1.700752905s: waiting for machine to come up
	I0826 10:47:22.876921  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:22.877374  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:22.877402  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:22.877333  107321 retry.go:31] will retry after 1.812613042s: waiting for machine to come up
	I0826 10:47:24.691557  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:24.692071  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:24.692100  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:24.692003  107321 retry.go:31] will retry after 2.40737115s: waiting for machine to come up
	I0826 10:47:27.102520  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:27.103020  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:27.103043  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:27.102969  107321 retry.go:31] will retry after 2.73995796s: waiting for machine to come up
	I0826 10:47:29.844860  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:29.845393  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:29.845420  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:29.845349  107321 retry.go:31] will retry after 2.95503839s: waiting for machine to come up
	I0826 10:47:32.803660  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:32.804236  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:32.804269  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:32.804152  107321 retry.go:31] will retry after 4.473711544s: waiting for machine to come up
	I0826 10:47:37.281799  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.282440  107298 main.go:141] libmachine: (addons-530639) Found IP for machine: 192.168.39.11
	I0826 10:47:37.282468  107298 main.go:141] libmachine: (addons-530639) Reserving static IP address...
	I0826 10:47:37.282482  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has current primary IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.282934  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find host DHCP lease matching {name: "addons-530639", mac: "52:54:00:9e:aa:b3", ip: "192.168.39.11"} in network mk-addons-530639
	I0826 10:47:37.363213  107298 main.go:141] libmachine: (addons-530639) DBG | Getting to WaitForSSH function...
	I0826 10:47:37.363237  107298 main.go:141] libmachine: (addons-530639) Reserved static IP address: 192.168.39.11
	I0826 10:47:37.363249  107298 main.go:141] libmachine: (addons-530639) Waiting for SSH to be available...
	I0826 10:47:37.366127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.366618  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.366647  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.366884  107298 main.go:141] libmachine: (addons-530639) DBG | Using SSH client type: external
	I0826 10:47:37.366922  107298 main.go:141] libmachine: (addons-530639) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa (-rw-------)
	I0826 10:47:37.366957  107298 main.go:141] libmachine: (addons-530639) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 10:47:37.366976  107298 main.go:141] libmachine: (addons-530639) DBG | About to run SSH command:
	I0826 10:47:37.366990  107298 main.go:141] libmachine: (addons-530639) DBG | exit 0
	I0826 10:47:37.503172  107298 main.go:141] libmachine: (addons-530639) DBG | SSH cmd err, output: <nil>: 
	I0826 10:47:37.503473  107298 main.go:141] libmachine: (addons-530639) KVM machine creation complete!
	I0826 10:47:37.503851  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:37.504387  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:37.504599  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:37.504776  107298 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 10:47:37.504792  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:37.506161  107298 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 10:47:37.506176  107298 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 10:47:37.506181  107298 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 10:47:37.506187  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.508328  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.508646  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.508675  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.508814  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.509003  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.509148  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.509252  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.509439  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.509630  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.509640  107298 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 10:47:37.618194  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 10:47:37.618229  107298 main.go:141] libmachine: Detecting the provisioner...
	I0826 10:47:37.618244  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.621299  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.621674  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.621706  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.621828  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.622072  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.622231  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.622427  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.622594  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.622783  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.622797  107298 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 10:47:37.731832  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 10:47:37.731894  107298 main.go:141] libmachine: found compatible host: buildroot
	I0826 10:47:37.731904  107298 main.go:141] libmachine: Provisioning with buildroot...
	I0826 10:47:37.731919  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.732182  107298 buildroot.go:166] provisioning hostname "addons-530639"
	I0826 10:47:37.732204  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.732408  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.734947  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.735260  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.735292  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.735473  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.735684  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.735859  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.736002  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.736157  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.736342  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.736354  107298 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-530639 && echo "addons-530639" | sudo tee /etc/hostname
	I0826 10:47:37.856490  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-530639
	
	I0826 10:47:37.856517  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.859541  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.860082  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.860116  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.860323  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.860544  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.860764  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.860938  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.861125  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.861298  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.861313  107298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-530639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-530639/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-530639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 10:47:37.979646  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 10:47:37.979690  107298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 10:47:37.979714  107298 buildroot.go:174] setting up certificates
	I0826 10:47:37.979730  107298 provision.go:84] configureAuth start
	I0826 10:47:37.979744  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.980119  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:37.982722  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.983092  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.983121  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.983249  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.985190  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.985507  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.985536  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.985680  107298 provision.go:143] copyHostCerts
	I0826 10:47:37.985808  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 10:47:37.985961  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 10:47:37.986048  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 10:47:37.986119  107298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.addons-530639 san=[127.0.0.1 192.168.39.11 addons-530639 localhost minikube]
	I0826 10:47:38.044501  107298 provision.go:177] copyRemoteCerts
	I0826 10:47:38.044567  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 10:47:38.044591  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.047475  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.047791  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.047820  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.048072  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.048291  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.048481  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.048631  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.133575  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 10:47:38.157412  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 10:47:38.181057  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 10:47:38.205113  107298 provision.go:87] duration metric: took 225.367648ms to configureAuth
	I0826 10:47:38.205151  107298 buildroot.go:189] setting minikube options for container-runtime
	I0826 10:47:38.205369  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:38.205477  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.208333  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.208704  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.208732  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.208919  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.209125  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.209299  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.209400  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.209617  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:38.209794  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:38.209809  107298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 10:47:38.482734  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 10:47:38.482760  107298 main.go:141] libmachine: Checking connection to Docker...
	I0826 10:47:38.482768  107298 main.go:141] libmachine: (addons-530639) Calling .GetURL
	I0826 10:47:38.484219  107298 main.go:141] libmachine: (addons-530639) DBG | Using libvirt version 6000000
	I0826 10:47:38.486655  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.486972  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.486998  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.487206  107298 main.go:141] libmachine: Docker is up and running!
	I0826 10:47:38.487225  107298 main.go:141] libmachine: Reticulating splines...
	I0826 10:47:38.487233  107298 client.go:171] duration metric: took 24.503868805s to LocalClient.Create
	I0826 10:47:38.487261  107298 start.go:167] duration metric: took 24.50393662s to libmachine.API.Create "addons-530639"
	I0826 10:47:38.487278  107298 start.go:293] postStartSetup for "addons-530639" (driver="kvm2")
	I0826 10:47:38.487291  107298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 10:47:38.487308  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.487572  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 10:47:38.487608  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.489726  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.490014  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.490043  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.490237  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.490494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.490672  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.490822  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.577010  107298 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 10:47:38.581059  107298 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 10:47:38.581090  107298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 10:47:38.581162  107298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 10:47:38.581191  107298 start.go:296] duration metric: took 93.904766ms for postStartSetup
	I0826 10:47:38.581226  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:38.581839  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:38.584692  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.585009  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.585042  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.585268  107298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json ...
	I0826 10:47:38.585470  107298 start.go:128] duration metric: took 24.621392499s to createHost
	I0826 10:47:38.585494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.587646  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.587950  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.587989  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.588134  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.588335  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.588502  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.588635  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.588822  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:38.588986  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:38.588996  107298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 10:47:38.700358  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724669258.678008754
	
	I0826 10:47:38.700392  107298 fix.go:216] guest clock: 1724669258.678008754
	I0826 10:47:38.700403  107298 fix.go:229] Guest: 2024-08-26 10:47:38.678008754 +0000 UTC Remote: 2024-08-26 10:47:38.585482553 +0000 UTC m=+24.731896412 (delta=92.526201ms)
	I0826 10:47:38.700467  107298 fix.go:200] guest clock delta is within tolerance: 92.526201ms
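The fix.go lines above record the guest-clock check: the guest reports 10:47:38.678008754 UTC, the host-side reference is 10:47:38.585482553 UTC, and the 92.526201ms difference is accepted. A minimal Go sketch of that comparison (standalone illustration, not minikube's fix.go; the one-second tolerance is an assumed value):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the log lines above.
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	guest, err := time.Parse(layout, "2024-08-26 10:47:38.678008754 +0000 UTC")
	if err != nil {
		panic(err)
	}
	remote, err := time.Parse(layout, "2024-08-26 10:47:38.585482553 +0000 UTC")
	if err != nil {
		panic(err)
	}

	delta := guest.Sub(remote) // 92.526201ms, matching the log
	tolerance := time.Second   // assumed threshold for illustration

	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}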
	I0826 10:47:38.700480  107298 start.go:83] releasing machines lock for "addons-530639", held for 24.736496664s
	I0826 10:47:38.700518  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.700853  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:38.703640  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.703870  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.703909  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.704049  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.704723  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.704946  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.705033  107298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 10:47:38.705110  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.705165  107298 ssh_runner.go:195] Run: cat /version.json
	I0826 10:47:38.705186  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.708220  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708255  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708628  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.708660  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.708691  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708729  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708863  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.709019  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.709103  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.709185  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.709254  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.709320  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.709375  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.709473  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.826014  107298 ssh_runner.go:195] Run: systemctl --version
	I0826 10:47:38.832460  107298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 10:47:38.992806  107298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 10:47:38.998471  107298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 10:47:38.998541  107298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 10:47:39.014505  107298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 10:47:39.014545  107298 start.go:495] detecting cgroup driver to use...
	I0826 10:47:39.014620  107298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 10:47:39.031420  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 10:47:39.046003  107298 docker.go:217] disabling cri-docker service (if available) ...
	I0826 10:47:39.046072  107298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 10:47:39.060496  107298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 10:47:39.074552  107298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 10:47:39.192843  107298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 10:47:39.330617  107298 docker.go:233] disabling docker service ...
	I0826 10:47:39.330701  107298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 10:47:39.344635  107298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 10:47:39.357823  107298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 10:47:39.498024  107298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 10:47:39.635067  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 10:47:39.648519  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 10:47:39.666429  107298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 10:47:39.666498  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.676992  107298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 10:47:39.677063  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.687666  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.698328  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.708783  107298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 10:47:39.719674  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.730334  107298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.748155  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.758549  107298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 10:47:39.767989  107298 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 10:47:39.768064  107298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 10:47:39.780334  107298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
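The ssh_runner lines above show the netfilter fallback: the sysctl probe fails because the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. A rough Go sketch of that probe-then-fallback flow (hypothetical helper built on os/exec, not minikube's ssh_runner):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and reports (but tolerates) failures, roughly like
// the logged ssh_runner calls; it is a hypothetical stand-in, not minikube code.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v: %v (%s)", name, args, err, out)
	}
	return err
}

func main() {
	// The sysctl only exists once the br_netfilter module is loaded,
	// so a failed probe triggers a modprobe, as in the log above.
	if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Kubernetes networking also needs IPv4 forwarding enabled.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}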
	I0826 10:47:39.790344  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:39.911965  107298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 10:47:40.050897  107298 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 10:47:40.051029  107298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 10:47:40.055751  107298 start.go:563] Will wait 60s for crictl version
	I0826 10:47:40.055824  107298 ssh_runner.go:195] Run: which crictl
	I0826 10:47:40.059511  107298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 10:47:40.098328  107298 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 10:47:40.098452  107298 ssh_runner.go:195] Run: crio --version
	I0826 10:47:40.130254  107298 ssh_runner.go:195] Run: crio --version
	I0826 10:47:40.159919  107298 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 10:47:40.161645  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:40.164398  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:40.164710  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:40.164740  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:40.164999  107298 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 10:47:40.169201  107298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 10:47:40.181655  107298 kubeadm.go:883] updating cluster {Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 10:47:40.181787  107298 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:40.181854  107298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 10:47:40.213812  107298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 10:47:40.213899  107298 ssh_runner.go:195] Run: which lz4
	I0826 10:47:40.217589  107298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 10:47:40.221614  107298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 10:47:40.221663  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 10:47:41.434431  107298 crio.go:462] duration metric: took 1.216879825s to copy over tarball
	I0826 10:47:41.434510  107298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 10:47:43.720590  107298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286015502s)
	I0826 10:47:43.720626  107298 crio.go:469] duration metric: took 2.286162048s to extract the tarball
	I0826 10:47:43.720635  107298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 10:47:43.757053  107298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 10:47:43.805221  107298 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 10:47:43.805256  107298 cache_images.go:84] Images are preloaded, skipping loading
	I0826 10:47:43.805265  107298 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.0 crio true true} ...
	I0826 10:47:43.805370  107298 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-530639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 10:47:43.805453  107298 ssh_runner.go:195] Run: crio config
	I0826 10:47:43.854319  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:43.854342  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:43.854352  107298 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 10:47:43.854378  107298 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-530639 NodeName:addons-530639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 10:47:43.854539  107298 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-530639"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 10:47:43.854625  107298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 10:47:43.864633  107298 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 10:47:43.864706  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 10:47:43.874020  107298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0826 10:47:43.893729  107298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 10:47:43.912136  107298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0826 10:47:43.930704  107298 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0826 10:47:43.934543  107298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 10:47:43.946492  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:44.066769  107298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 10:47:44.083219  107298 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639 for IP: 192.168.39.11
	I0826 10:47:44.083254  107298 certs.go:194] generating shared ca certs ...
	I0826 10:47:44.083278  107298 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.083469  107298 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 10:47:44.317559  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt ...
	I0826 10:47:44.317596  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt: {Name:mk528fb032b1b203659bc7401a1f3339f9cb42ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.317787  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key ...
	I0826 10:47:44.317798  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key: {Name:mk4bc8d0deb4ba0b612b6025cf4860247a955bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.317880  107298 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 10:47:44.703309  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt ...
	I0826 10:47:44.703343  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt: {Name:mk1b2a7cf4acdf32adf1087f9ce8c82681815beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.703516  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key ...
	I0826 10:47:44.703527  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key: {Name:mkc22cf5578b106a539c82ed4fa8827886c75fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.703598  107298 certs.go:256] generating profile certs ...
	I0826 10:47:44.703665  107298 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key
	I0826 10:47:44.703688  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt with IP's: []
	I0826 10:47:44.944510  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt ...
	I0826 10:47:44.944545  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: {Name:mk5d1f6fa9bb983f8038422980e3ca85392492c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.944723  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key ...
	I0826 10:47:44.944736  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key: {Name:mkce47dadd34f8bae607c80a4f1b0f0c86e63785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.944807  107298 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754
	I0826 10:47:44.944822  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0826 10:47:45.159568  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 ...
	I0826 10:47:45.159600  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754: {Name:mk4ea42c643206796cbe3966cc77eecdfd68e79b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.159765  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754 ...
	I0826 10:47:45.159779  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754: {Name:mk5ca7443b8a31961552bdff2b9da9a94eb373bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.159848  107298 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt
	I0826 10:47:45.159947  107298 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key
	I0826 10:47:45.159992  107298 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key
	I0826 10:47:45.160010  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt with IP's: []
	I0826 10:47:45.253413  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt ...
	I0826 10:47:45.253447  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt: {Name:mke4af5277de083767543982254016a55df6bcd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.253609  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key ...
	I0826 10:47:45.253621  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key: {Name:mk769f3d1ad841791efadfe6cfcaa93a94069403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.253845  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 10:47:45.253883  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 10:47:45.253906  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 10:47:45.253932  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 10:47:45.254524  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 10:47:45.286584  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 10:47:45.311045  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 10:47:45.337133  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 10:47:45.361748  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0826 10:47:45.387421  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 10:47:45.412750  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 10:47:45.437978  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 10:47:45.462180  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 10:47:45.486806  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 10:47:45.503413  107298 ssh_runner.go:195] Run: openssl version
	I0826 10:47:45.509191  107298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 10:47:45.520387  107298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.525349  107298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.525450  107298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.531582  107298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
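The openssl and ln commands above install the minikube CA the way OpenSSL's lookup expects: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 here), and that hash names the <hash>.0 symlink in /etc/ssl/certs. A small Go sketch of the same idea (illustrative only; the paths and the sudo wrapper mirror the log but are otherwise assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Paths mirror the logged commands.
	const source = "/usr/share/ca-certificates/minikubeCA.pem"
	const trusted = "/etc/ssl/certs/minikubeCA.pem"

	// openssl prints the subject hash used to name the lookup symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", source).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the log

	// ln -fs <trusted> /etc/ssl/certs/<hash>.0, as in the logged shell command.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := exec.Command("sudo", "ln", "-fs", trusted, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", trusted)
}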
	I0826 10:47:45.543172  107298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 10:47:45.547872  107298 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 10:47:45.547945  107298 kubeadm.go:392] StartCluster: {Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:47:45.548042  107298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 10:47:45.548122  107298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 10:47:45.586788  107298 cri.go:89] found id: ""
	I0826 10:47:45.586893  107298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 10:47:45.597479  107298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 10:47:45.607935  107298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 10:47:45.618174  107298 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 10:47:45.618206  107298 kubeadm.go:157] found existing configuration files:
	
	I0826 10:47:45.618255  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 10:47:45.628020  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 10:47:45.628116  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 10:47:45.638435  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 10:47:45.648280  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 10:47:45.648364  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 10:47:45.660893  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 10:47:45.670383  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 10:47:45.670484  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 10:47:45.684763  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 10:47:45.696352  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 10:47:45.696441  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 10:47:45.710879  107298 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 10:47:45.761454  107298 kubeadm.go:310] W0826 10:47:45.746260     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 10:47:45.762362  107298 kubeadm.go:310] W0826 10:47:45.747299     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 10:47:45.874243  107298 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 10:47:55.215839  107298 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 10:47:55.215941  107298 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 10:47:55.216064  107298 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 10:47:55.216160  107298 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 10:47:55.216274  107298 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 10:47:55.216451  107298 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 10:47:55.218290  107298 out.go:235]   - Generating certificates and keys ...
	I0826 10:47:55.218374  107298 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 10:47:55.218453  107298 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 10:47:55.218561  107298 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 10:47:55.218645  107298 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 10:47:55.218728  107298 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 10:47:55.218796  107298 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 10:47:55.218891  107298 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 10:47:55.219045  107298 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-530639 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0826 10:47:55.219111  107298 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 10:47:55.219222  107298 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-530639 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0826 10:47:55.219304  107298 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 10:47:55.219401  107298 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 10:47:55.219465  107298 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 10:47:55.219513  107298 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 10:47:55.219560  107298 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 10:47:55.219613  107298 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 10:47:55.219669  107298 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 10:47:55.219727  107298 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 10:47:55.219781  107298 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 10:47:55.219855  107298 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 10:47:55.219915  107298 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 10:47:55.221576  107298 out.go:235]   - Booting up control plane ...
	I0826 10:47:55.221665  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 10:47:55.221734  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 10:47:55.221827  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 10:47:55.221941  107298 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 10:47:55.222045  107298 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 10:47:55.222109  107298 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 10:47:55.222241  107298 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 10:47:55.222403  107298 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 10:47:55.222502  107298 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.237668ms
	I0826 10:47:55.222623  107298 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 10:47:55.222707  107298 kubeadm.go:310] [api-check] The API server is healthy after 5.503465218s
	I0826 10:47:55.222827  107298 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 10:47:55.222989  107298 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 10:47:55.223081  107298 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 10:47:55.223254  107298 kubeadm.go:310] [mark-control-plane] Marking the node addons-530639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 10:47:55.223335  107298 kubeadm.go:310] [bootstrap-token] Using token: 7wdj76.nlpbotovotxm4wlx
	I0826 10:47:55.224812  107298 out.go:235]   - Configuring RBAC rules ...
	I0826 10:47:55.224930  107298 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 10:47:55.225031  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 10:47:55.225156  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 10:47:55.225296  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 10:47:55.225434  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 10:47:55.225541  107298 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 10:47:55.225672  107298 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 10:47:55.225731  107298 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 10:47:55.225804  107298 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 10:47:55.225825  107298 kubeadm.go:310] 
	I0826 10:47:55.225910  107298 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 10:47:55.225923  107298 kubeadm.go:310] 
	I0826 10:47:55.225999  107298 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 10:47:55.226005  107298 kubeadm.go:310] 
	I0826 10:47:55.226026  107298 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 10:47:55.226080  107298 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 10:47:55.226124  107298 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 10:47:55.226129  107298 kubeadm.go:310] 
	I0826 10:47:55.226177  107298 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 10:47:55.226183  107298 kubeadm.go:310] 
	I0826 10:47:55.226224  107298 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 10:47:55.226230  107298 kubeadm.go:310] 
	I0826 10:47:55.226301  107298 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 10:47:55.226410  107298 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 10:47:55.226486  107298 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 10:47:55.226492  107298 kubeadm.go:310] 
	I0826 10:47:55.226559  107298 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 10:47:55.226622  107298 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 10:47:55.226629  107298 kubeadm.go:310] 
	I0826 10:47:55.226706  107298 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wdj76.nlpbotovotxm4wlx \
	I0826 10:47:55.226805  107298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 10:47:55.226859  107298 kubeadm.go:310] 	--control-plane 
	I0826 10:47:55.226870  107298 kubeadm.go:310] 
	I0826 10:47:55.226939  107298 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 10:47:55.226946  107298 kubeadm.go:310] 
	I0826 10:47:55.227015  107298 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wdj76.nlpbotovotxm4wlx \
	I0826 10:47:55.227125  107298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
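
(kubeadm has finished at this point. The lines that follow configure the bridge CNI and then poll the API server until the default service account exists. A manual spot-check on the node would look roughly like the commands below; this is a sketch assuming the admin kubeconfig written above, not part of the test flow.)

  export KUBECONFIG=/etc/kubernetes/admin.conf
  kubectl get nodes
  kubectl -n kube-system get pods
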
	I0826 10:47:55.227140  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:55.227147  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:55.228673  107298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 10:47:55.229923  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 10:47:55.241477  107298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
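
(The log only records that 496 bytes were copied to /etc/cni/net.d/1-k8s.conflist; the file contents are not printed. A typical bridge conflist has roughly the shape sketched below, written here as a shell heredoc for illustration only; the subnet and exact fields minikube writes may differ.)

  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF
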
	I0826 10:47:55.264645  107298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 10:47:55.264734  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:55.264770  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-530639 minikube.k8s.io/updated_at=2024_08_26T10_47_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=addons-530639 minikube.k8s.io/primary=true
	I0826 10:47:55.286015  107298 ops.go:34] apiserver oom_adj: -16
	I0826 10:47:55.417683  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:55.917840  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:56.418559  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:56.918088  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:57.418085  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:57.917870  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:58.418358  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:58.918532  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:59.418415  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:59.504262  107298 kubeadm.go:1113] duration metric: took 4.239603265s to wait for elevateKubeSystemPrivileges
	I0826 10:47:59.504305  107298 kubeadm.go:394] duration metric: took 13.956365982s to StartCluster
	I0826 10:47:59.504326  107298 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:59.504479  107298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:47:59.504869  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:59.505077  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0826 10:47:59.505127  107298 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 10:47:59.505191  107298 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
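
(The map above is the effective addon set for this profile; volcano is requested but, as the warning later in the log shows, it is not supported with the crio runtime. For reference, the equivalent CLI for inspecting or toggling addons on this profile would be roughly as follows; standard minikube addon commands, not taken from the test run.)

  minikube -p addons-530639 addons list
  minikube -p addons-530639 addons enable metrics-server
  minikube -p addons-530639 addons disable volcano
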
	I0826 10:47:59.505317  107298 addons.go:69] Setting yakd=true in profile "addons-530639"
	I0826 10:47:59.505315  107298 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-530639"
	I0826 10:47:59.505360  107298 addons.go:234] Setting addon yakd=true in "addons-530639"
	I0826 10:47:59.505348  107298 addons.go:69] Setting cloud-spanner=true in profile "addons-530639"
	I0826 10:47:59.505360  107298 addons.go:69] Setting metrics-server=true in profile "addons-530639"
	I0826 10:47:59.505394  107298 addons.go:69] Setting registry=true in profile "addons-530639"
	I0826 10:47:59.505402  107298 addons.go:234] Setting addon cloud-spanner=true in "addons-530639"
	I0826 10:47:59.505406  107298 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-530639"
	I0826 10:47:59.505422  107298 addons.go:69] Setting default-storageclass=true in profile "addons-530639"
	I0826 10:47:59.505442  107298 addons.go:69] Setting ingress-dns=true in profile "addons-530639"
	I0826 10:47:59.505447  107298 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-530639"
	I0826 10:47:59.505453  107298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-530639"
	I0826 10:47:59.505458  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505467  107298 addons.go:69] Setting inspektor-gadget=true in profile "addons-530639"
	I0826 10:47:59.505483  107298 addons.go:234] Setting addon inspektor-gadget=true in "addons-530639"
	I0826 10:47:59.505519  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505530  107298 addons.go:69] Setting gcp-auth=true in profile "addons-530639"
	I0826 10:47:59.505548  107298 mustload.go:65] Loading cluster: addons-530639
	I0826 10:47:59.505740  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:59.505398  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505888  107298 addons.go:69] Setting volcano=true in profile "addons-530639"
	I0826 10:47:59.505901  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505918  107298 addons.go:234] Setting addon volcano=true in "addons-530639"
	I0826 10:47:59.505931  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505940  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505944  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506000  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506064  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506087  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506182  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506200  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506218  107298 addons.go:69] Setting volumesnapshots=true in profile "addons-530639"
	I0826 10:47:59.506249  107298 addons.go:234] Setting addon volumesnapshots=true in "addons-530639"
	I0826 10:47:59.506276  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506292  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506326  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506457  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505403  107298 addons.go:69] Setting helm-tiller=true in profile "addons-530639"
	I0826 10:47:59.506488  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506495  107298 addons.go:234] Setting addon helm-tiller=true in "addons-530639"
	I0826 10:47:59.506521  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506557  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506576  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506644  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506665  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505460  107298 addons.go:234] Setting addon ingress-dns=true in "addons-530639"
	I0826 10:47:59.506809  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506904  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506928  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505382  107298 addons.go:69] Setting storage-provisioner=true in profile "addons-530639"
	I0826 10:47:59.507188  107298 addons.go:234] Setting addon storage-provisioner=true in "addons-530639"
	I0826 10:47:59.507216  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.507225  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.507247  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505424  107298 addons.go:234] Setting addon metrics-server=true in "addons-530639"
	I0826 10:47:59.507286  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505384  107298 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-530639"
	I0826 10:47:59.508162  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.508527  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.508546  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505393  107298 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-530639"
	I0826 10:47:59.511059  107298 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-530639"
	I0826 10:47:59.511106  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.511490  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.511525  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.511674  107298 out.go:177] * Verifying Kubernetes components...
	I0826 10:47:59.505429  107298 addons.go:234] Setting addon registry=true in "addons-530639"
	I0826 10:47:59.511884  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.513175  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:59.505432  107298 addons.go:69] Setting ingress=true in profile "addons-530639"
	I0826 10:47:59.513335  107298 addons.go:234] Setting addon ingress=true in "addons-530639"
	I0826 10:47:59.513381  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505367  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:59.527591  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35559
	I0826 10:47:59.527820  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0826 10:47:59.527947  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0826 10:47:59.528365  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.528496  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.529049  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.529069  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.529094  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.529112  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.529449  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.529519  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0826 10:47:59.529547  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.529744  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.529822  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.530407  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.530450  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.530508  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.530702  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.530714  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.531139  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.531159  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.531231  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.531739  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.532438  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.532486  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.534681  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0826 10:47:59.536819  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0826 10:47:59.539188  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539245  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539322  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539342  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539191  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539390  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539449  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539467  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539674  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539719  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.540206  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.540333  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.540859  107298 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-530639"
	I0826 10:47:59.540912  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.541284  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.541328  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.542036  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.542063  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.542220  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.542241  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.542287  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0826 10:47:59.542478  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.545603  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.545743  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.545801  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.548072  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.548099  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.548787  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.548833  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.549073  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.549703  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.549750  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.550316  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0826 10:47:59.550963  107298 addons.go:234] Setting addon default-storageclass=true in "addons-530639"
	I0826 10:47:59.551006  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.551343  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.551375  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.555357  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.555904  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.555925  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.556254  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.556424  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.558641  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.559245  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.559306  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.569207  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0826 10:47:59.569987  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.570655  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.570686  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.571168  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.571794  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.571851  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.579058  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0826 10:47:59.579656  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.580205  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.580230  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.580650  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.581346  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.581394  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.581638  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45099
	I0826 10:47:59.582191  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.582822  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.582854  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.583250  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.583497  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.585043  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0826 10:47:59.585537  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.586181  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.586555  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:47:59.586574  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:47:59.586921  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:47:59.586957  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:47:59.586965  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:47:59.586973  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:47:59.586983  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:47:59.587188  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0826 10:47:59.587645  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:47:59.588130  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.588149  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.588458  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.588751  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.588900  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.588915  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.589495  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.589545  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.589621  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0826 10:47:59.590089  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.590672  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.590691  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.591332  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.592135  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.592172  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.592642  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0826 10:47:59.593406  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.594081  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.594097  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.594481  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.594658  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0826 10:47:59.594886  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.595167  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.595192  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.595358  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.595713  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.595731  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.596161  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.597120  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.597146  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.597405  107298 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0826 10:47:59.597430  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	W0826 10:47:59.597529  107298 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0826 10:47:59.598626  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0826 10:47:59.599085  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.599558  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.600254  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.600281  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.600442  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.600779  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.601340  107298 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0826 10:47:59.602341  107298 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0826 10:47:59.603138  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40923
	I0826 10:47:59.603214  107298 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0826 10:47:59.603236  107298 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0826 10:47:59.603260  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.603740  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.604040  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0826 10:47:59.604063  107298 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0826 10:47:59.604094  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.605839  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.605868  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.606390  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38091
	I0826 10:47:59.606768  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.606789  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.607271  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.607691  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.607939  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.608155  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.608508  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.608983  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.609024  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.609265  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609289  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.609311  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609323  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0826 10:47:59.609717  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609851  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.610178  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.610262  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.610276  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.610281  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.610298  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.610422  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.610557  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.610912  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.611252  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.611467  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.611485  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.611541  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.611754  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.612003  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.612822  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.613112  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.615230  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0826 10:47:59.615443  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.615507  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0826 10:47:59.616155  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.616690  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.616716  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.617262  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.617748  107298 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0826 10:47:59.618095  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.618116  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.618118  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.618291  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.618643  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.618810  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0826 10:47:59.619348  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.619870  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.619887  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.619905  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0826 10:47:59.619923  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0826 10:47:59.619943  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.620248  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.620394  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.621284  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.623296  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.623311  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.625097  107298 out.go:177]   - Using image docker.io/busybox:stable
	I0826 10:47:59.625097  107298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 10:47:59.625543  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.625983  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.626022  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.626060  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I0826 10:47:59.626630  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.626691  107298 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 10:47:59.626706  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 10:47:59.626726  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.627497  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.627516  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.627996  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.628204  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0826 10:47:59.628278  107298 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0826 10:47:59.628393  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.628581  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.628726  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.628779  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.628824  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.628899  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.629337  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.629917  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.629934  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.630417  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.630650  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.632120  107298 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0826 10:47:59.632145  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0826 10:47:59.632167  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.632336  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0826 10:47:59.632821  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.633617  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.634931  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.634952  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.635027  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635353  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.635388  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635646  107298 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0826 10:47:59.635673  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635937  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.636234  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.636309  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.636356  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.636376  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.636597  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.636793  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.636969  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.637121  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.637184  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.637232  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.637453  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.637654  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.638493  107298 out.go:177]   - Using image docker.io/registry:2.8.3
	I0826 10:47:59.640091  107298 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0826 10:47:59.640110  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0826 10:47:59.640128  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.643798  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.644319  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.644367  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.644600  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.644701  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0826 10:47:59.645080  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.645283  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.645359  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.645627  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.646306  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0826 10:47:59.646986  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.647653  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.647671  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.647733  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0826 10:47:59.648006  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.648020  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.648075  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.648482  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.648534  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.649050  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.649103  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.650185  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.650207  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.650709  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.651031  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.651216  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.652057  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.653499  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.653550  107298 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0826 10:47:59.653669  107298 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0826 10:47:59.654908  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0826 10:47:59.654978  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0826 10:47:59.654990  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 10:47:59.655046  107298 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 10:47:59.655067  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.655111  107298 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0826 10:47:59.655125  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0826 10:47:59.655144  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.656085  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0826 10:47:59.656228  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0826 10:47:59.656243  107298 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0826 10:47:59.656252  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0826 10:47:59.656262  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.656270  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.656939  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.657405  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.657503  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.657524  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.657935  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.657955  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.658074  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.658095  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.658108  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.658450  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.658453  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.658932  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.659323  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.659366  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.660321  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.660421  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.660984  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.661020  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.661179  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.661339  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.661450  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.661550  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.662004  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.662140  107298 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0826 10:47:59.662341  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.663091  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.663127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.663657  107298 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0826 10:47:59.663677  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0826 10:47:59.663702  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0826 10:47:59.663767  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.663831  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.663864  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.663987  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.664179  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.664306  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.664319  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.665023  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.665100  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.665666  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.665881  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.665995  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:47:59.666014  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0826 10:47:59.666147  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.666299  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.666713  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.667239  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.667333  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.667383  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.667578  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.667740  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.667935  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.668510  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0826 10:47:59.668564  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0826 10:47:59.669127  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0826 10:47:59.669256  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46645
	I0826 10:47:59.669636  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.669750  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.670212  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.670226  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.670234  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.670238  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.670528  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.670578  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.670765  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.670771  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.670963  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0826 10:47:59.670978  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:47:59.672196  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0826 10:47:59.672461  107298 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0826 10:47:59.672484  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0826 10:47:59.672505  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.672552  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.673680  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.674048  107298 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 10:47:59.674059  107298 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 10:47:59.674073  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.674242  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0826 10:47:59.674247  107298 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0826 10:47:59.675899  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0826 10:47:59.675979  107298 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0826 10:47:59.676005  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0826 10:47:59.676024  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.676166  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.676760  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.676781  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.677054  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.677277  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.677738  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.677923  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.678360  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0826 10:47:59.678559  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679218  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.679245  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679338  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679418  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.679602  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.679626  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.679651  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679730  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0826 10:47:59.679747  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0826 10:47:59.679782  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.679808  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.679843  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.679998  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.680018  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.680139  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.680266  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.682389  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.682927  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.682957  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.683154  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.683361  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.683596  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.683759  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	W0826 10:47:59.718139  107298 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49514->192.168.39.11:22: read: connection reset by peer
	I0826 10:47:59.718193  107298 retry.go:31] will retry after 323.018998ms: ssh: handshake failed: read tcp 192.168.39.1:49514->192.168.39.11:22: read: connection reset by peer
	W0826 10:47:59.718263  107298 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49518->192.168.39.11:22: read: connection reset by peer
	I0826 10:47:59.718273  107298 retry.go:31] will retry after 352.73951ms: ssh: handshake failed: read tcp 192.168.39.1:49518->192.168.39.11:22: read: connection reset by peer
	I0826 10:47:59.882220  107298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 10:47:59.882259  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0826 10:47:59.912702  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0826 10:47:59.951500  107298 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0826 10:47:59.951534  107298 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0826 10:47:59.964048  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0826 10:47:59.975067  107298 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0826 10:47:59.975108  107298 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0826 10:47:59.978819  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 10:47:59.978870  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0826 10:47:59.988578  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 10:47:59.989630  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0826 10:47:59.989653  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0826 10:48:00.019383  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 10:48:00.036394  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0826 10:48:00.063218  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0826 10:48:00.063254  107298 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0826 10:48:00.065708  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0826 10:48:00.071292  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0826 10:48:00.071325  107298 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0826 10:48:00.102916  107298 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0826 10:48:00.102950  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0826 10:48:00.131548  107298 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0826 10:48:00.131588  107298 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0826 10:48:00.134646  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0826 10:48:00.134670  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0826 10:48:00.153844  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 10:48:00.153868  107298 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 10:48:00.263952  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0826 10:48:00.263983  107298 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0826 10:48:00.264366  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0826 10:48:00.264394  107298 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0826 10:48:00.279725  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0826 10:48:00.279754  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0826 10:48:00.322403  107298 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0826 10:48:00.322434  107298 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0826 10:48:00.348642  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 10:48:00.348669  107298 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 10:48:00.365055  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0826 10:48:00.431617  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0826 10:48:00.431664  107298 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0826 10:48:00.434977  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0826 10:48:00.435010  107298 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0826 10:48:00.452827  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0826 10:48:00.481437  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 10:48:00.494021  107298 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0826 10:48:00.494063  107298 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0826 10:48:00.538185  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0826 10:48:00.548517  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0826 10:48:00.548554  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0826 10:48:00.575978  107298 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:00.576003  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0826 10:48:00.618283  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0826 10:48:00.618319  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0826 10:48:00.702315  107298 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0826 10:48:00.702347  107298 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0826 10:48:00.835544  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0826 10:48:00.899939  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:00.966513  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0826 10:48:00.966554  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0826 10:48:01.045598  107298 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0826 10:48:01.045640  107298 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0826 10:48:01.219693  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0826 10:48:01.219724  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0826 10:48:01.306204  107298 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0826 10:48:01.306233  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0826 10:48:01.372437  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0826 10:48:01.372469  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0826 10:48:01.440430  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0826 10:48:01.610601  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0826 10:48:01.610639  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0826 10:48:01.758128  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0826 10:48:01.758155  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0826 10:48:01.840983  107298 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.958690198s)
	I0826 10:48:01.841015  107298 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0826 10:48:01.841028  107298 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.958771105s)
	I0826 10:48:01.841141  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.928405448s)
	I0826 10:48:01.841201  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:01.841215  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:01.841651  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:01.841690  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:01.841707  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:01.841764  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:01.841785  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:01.842082  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:01.842114  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:01.842139  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:01.842108  107298 node_ready.go:35] waiting up to 6m0s for node "addons-530639" to be "Ready" ...
	I0826 10:48:01.847028  107298 node_ready.go:49] node "addons-530639" has status "Ready":"True"
	I0826 10:48:01.847059  107298 node_ready.go:38] duration metric: took 4.835462ms for node "addons-530639" to be "Ready" ...
	I0826 10:48:01.847073  107298 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 10:48:01.871750  107298 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:02.151015  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0826 10:48:02.151052  107298 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0826 10:48:02.347148  107298 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-530639" context rescaled to 1 replicas
	I0826 10:48:02.446275  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0826 10:48:02.446314  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0826 10:48:02.797322  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0826 10:48:02.797346  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0826 10:48:02.899791  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0826 10:48:02.899825  107298 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0826 10:48:03.164016  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0826 10:48:03.881628  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:05.918274  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:06.634167  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0826 10:48:06.634225  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:48:06.637305  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:06.637774  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:48:06.637817  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:06.637998  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:48:06.638256  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:48:06.638399  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:48:06.638531  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:48:07.149028  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0826 10:48:07.357403  107298 addons.go:234] Setting addon gcp-auth=true in "addons-530639"
	I0826 10:48:07.357469  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:48:07.357867  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:48:07.357906  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:48:07.374389  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0826 10:48:07.374976  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:48:07.375555  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:48:07.375585  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:48:07.376011  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:48:07.376515  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:48:07.376542  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:48:07.393088  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0826 10:48:07.393617  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:48:07.394153  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:48:07.394182  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:48:07.394550  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:48:07.394803  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:48:07.396415  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:48:07.396698  107298 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0826 10:48:07.396723  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:48:07.399468  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:07.399895  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:48:07.399925  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:07.400104  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:48:07.400298  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:48:07.400494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:48:07.400725  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:48:08.319296  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.355208577s)
	I0826 10:48:08.319355  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319368  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319417  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.33080134s)
	I0826 10:48:08.319468  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319483  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319511  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.283088334s)
	I0826 10:48:08.319483  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.300065129s)
	I0826 10:48:08.319557  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319538  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319606  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319630  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.253892816s)
	I0826 10:48:08.319665  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319573  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319676  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319722  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.954632116s)
	I0826 10:48:08.319750  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319759  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319762  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.866878458s)
	I0826 10:48:08.319785  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319794  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319861  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.838395324s)
	I0826 10:48:08.319883  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319893  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319938  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.781712131s)
	I0826 10:48:08.319968  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319977  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319986  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.484403191s)
	I0826 10:48:08.320005  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320014  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320130  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.420145979s)
	W0826 10:48:08.320164  107298 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0826 10:48:08.320208  107298 retry.go:31] will retry after 306.411063ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0826 10:48:08.320235  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.87975339s)
	I0826 10:48:08.320263  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320275  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320590  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320597  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320625  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320630  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320645  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320646  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320655  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320657  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320665  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320645  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320679  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320689  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320665  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320711  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320713  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320801  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320803  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320840  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320848  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320860  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320874  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320909  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320924  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320936  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320951  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320962  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320989  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321005  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321010  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321040  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321058  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.321262  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321306  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321313  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321472  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321495  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321502  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321522  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321530  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320822  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321582  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321591  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321598  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.321652  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321675  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321682  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321890  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321914  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321920  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320929  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.323192  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.323207  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.323632  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.323668  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.323676  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325398  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325452  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325465  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325476  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.325483  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.325565  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325570  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325580  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325590  107298 addons.go:475] Verifying addon metrics-server=true in "addons-530639"
	I0826 10:48:08.325615  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325643  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325651  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325924  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325942  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325965  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325971  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325981  107298 addons.go:475] Verifying addon registry=true in "addons-530639"
	I0826 10:48:08.326345  107298 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-530639 service yakd-dashboard -n yakd-dashboard
	
	I0826 10:48:08.326822  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.326878  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.326889  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327094  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.327109  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327118  107298 addons.go:475] Verifying addon ingress=true in "addons-530639"
	I0826 10:48:08.327260  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.327438  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327283  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.327866  107298 out.go:177] * Verifying registry addon...
	I0826 10:48:08.328329  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.328349  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.328365  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.328374  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.328643  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.328666  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.328703  107298 out.go:177] * Verifying ingress addon...
	I0826 10:48:08.330314  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0826 10:48:08.330740  107298 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0826 10:48:08.388224  107298 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0826 10:48:08.388251  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:08.389037  107298 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0826 10:48:08.389068  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:08.394603  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:08.398504  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.398583  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.398918  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.398937  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	W0826 10:48:08.399054  107298 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0826 10:48:08.419743  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.419767  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.420068  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.420092  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.420109  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.626883  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:09.092135  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.092699  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.353858  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.353893  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.865671  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.865904  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.893284  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729205013s)
	I0826 10:48:09.893357  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893376  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893409  107298 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.496687736s)
	I0826 10:48:09.893542  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.266617617s)
	I0826 10:48:09.893647  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893665  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893748  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.893805  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.893812  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.893878  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893906  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893929  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.893945  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.893949  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.893959  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893967  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.894194  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.894231  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.894240  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.895321  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:48:09.895964  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.895985  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.896002  107298 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-530639"
	I0826 10:48:09.898209  107298 out.go:177] * Verifying csi-hostpath-driver addon...
	I0826 10:48:09.898211  107298 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0826 10:48:09.900395  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0826 10:48:09.900422  107298 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0826 10:48:09.901247  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0826 10:48:09.942978  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0826 10:48:09.943008  107298 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0826 10:48:09.961820  107298 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0826 10:48:09.961858  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:10.053431  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0826 10:48:10.053463  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0826 10:48:10.139071  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0826 10:48:10.334087  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:10.337443  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:10.406869  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:10.837000  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:10.837256  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:10.880507  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:10.908643  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:11.342604  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:11.343921  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:11.424179  107298 pod_ready.go:98] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:48:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.11 HostIPs:[{IP:192.168.39.11}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-26 10:47:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-26 10:48:04 +0000 UTC,FinishedAt:2024-08-26 10:48:09 +0000 UTC,ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a Started:0xc001442720 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016d6780} {Name:kube-api-access-ltfps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016d6790}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0826 10:48:11.424210  107298 pod_ready.go:82] duration metric: took 9.552411873s for pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace to be "Ready" ...
	E0826 10:48:11.424223  107298 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:48:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.11 HostIPs:[{IP:192.168.39.11}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-26 10:47:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-26 10:48:04 +0000 UTC,FinishedAt:2024-08-26 10:48:09 +0000 UTC,ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a Started:0xc001442720 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016d6780} {Name:kube-api-access-ltfps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016d6790}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0826 10:48:11.424236  107298 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.437882  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:11.545349  107298 pod_ready.go:93] pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.545379  107298 pod_ready.go:82] duration metric: took 121.134263ms for pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.545392  107298 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.616934  107298 pod_ready.go:93] pod "etcd-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.616965  107298 pod_ready.go:82] duration metric: took 71.565501ms for pod "etcd-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.616980  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.693045  107298 pod_ready.go:93] pod "kube-apiserver-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.693088  107298 pod_ready.go:82] duration metric: took 76.097584ms for pod "kube-apiserver-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.693104  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.808330  107298 pod_ready.go:93] pod "kube-controller-manager-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.808357  107298 pod_ready.go:82] duration metric: took 115.243832ms for pod "kube-controller-manager-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.808367  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qbghq" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.820337  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.681215995s)
	I0826 10:48:11.820407  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:11.820424  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:11.820780  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:11.820810  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:11.820822  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:11.820831  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:11.820839  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:11.821111  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:11.821129  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:11.821131  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:11.823210  107298 addons.go:475] Verifying addon gcp-auth=true in "addons-530639"
	I0826 10:48:11.825091  107298 out.go:177] * Verifying gcp-auth addon...
	I0826 10:48:11.827490  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0826 10:48:11.867702  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:11.868106  107298 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0826 10:48:11.868124  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:11.868697  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:11.889037  107298 pod_ready.go:93] pod "kube-proxy-qbghq" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.889068  107298 pod_ready.go:82] duration metric: took 80.693517ms for pod "kube-proxy-qbghq" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.889083  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.959343  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:12.176190  107298 pod_ready.go:93] pod "kube-scheduler-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:12.176217  107298 pod_ready.go:82] duration metric: took 287.12697ms for pod "kube-scheduler-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:12.176228  107298 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:12.332305  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:12.342001  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:12.345308  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:12.434480  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:12.831162  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:12.834442  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:12.834994  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:12.905740  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:13.331759  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:13.334042  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:13.335511  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:13.417010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:13.831696  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:13.835233  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:13.835834  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:13.907247  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:14.186773  107298 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:14.331374  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:14.334097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:14.334350  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:14.408092  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:14.831845  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:14.834307  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:14.835201  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:14.906503  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:15.332036  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:15.335123  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:15.335571  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:15.406770  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:15.832221  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:15.835051  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:15.835242  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:15.906007  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:16.334000  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:16.334342  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:16.336529  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:16.407082  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:16.683303  107298 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:16.831490  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:16.840182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:16.841523  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:16.906660  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:17.331505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:17.334051  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:17.334466  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:17.405468  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:17.830915  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:17.834241  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:17.834664  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:17.906351  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:18.332632  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:18.334590  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:18.336678  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:18.406414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:18.833706  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:18.834979  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:18.835501  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:18.905777  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:19.183056  107298 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:19.183084  107298 pod_ready.go:82] duration metric: took 7.006849533s for pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:19.183091  107298 pod_ready.go:39] duration metric: took 17.336002509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 10:48:19.183107  107298 api_server.go:52] waiting for apiserver process to appear ...
	I0826 10:48:19.183160  107298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 10:48:19.199882  107298 api_server.go:72] duration metric: took 19.694713746s to wait for apiserver process to appear ...
	I0826 10:48:19.199918  107298 api_server.go:88] waiting for apiserver healthz status ...
	I0826 10:48:19.199940  107298 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0826 10:48:19.204286  107298 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0826 10:48:19.205213  107298 api_server.go:141] control plane version: v1.31.0
	I0826 10:48:19.205263  107298 api_server.go:131] duration metric: took 5.336161ms to wait for apiserver health ...
	I0826 10:48:19.205274  107298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 10:48:19.215199  107298 system_pods.go:59] 18 kube-system pods found
	I0826 10:48:19.215237  107298 system_pods.go:61] "coredns-6f6b679f8f-wkxkf" [22b66a68-1ed8-47c0-98fb-681f0fc08eca] Running
	I0826 10:48:19.215247  107298 system_pods.go:61] "csi-hostpath-attacher-0" [5b08e2d1-6ecc-4500-82c7-1163b840f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0826 10:48:19.215258  107298 system_pods.go:61] "csi-hostpath-resizer-0" [b3b0e195-ef58-49e3-9bc3-197ea739961f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0826 10:48:19.215269  107298 system_pods.go:61] "csi-hostpathplugin-dqt92" [e5c11c5c-dc5c-4e44-90bd-7fd30cff1ebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0826 10:48:19.215276  107298 system_pods.go:61] "etcd-addons-530639" [083a7cd1-96ca-428a-b150-66940ba38303] Running
	I0826 10:48:19.215287  107298 system_pods.go:61] "kube-apiserver-addons-530639" [33036b21-fd01-4dc2-a607-621408bba9ab] Running
	I0826 10:48:19.215294  107298 system_pods.go:61] "kube-controller-manager-addons-530639" [82b4411c-6afc-4b37-a8b4-c5c859cf55d4] Running
	I0826 10:48:19.215305  107298 system_pods.go:61] "kube-ingress-dns-minikube" [4388a77f-5011-4640-bee8-9dabf8fa9b50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0826 10:48:19.215309  107298 system_pods.go:61] "kube-proxy-qbghq" [041a740f-019e-4b5a-b615-018af363dbb1] Running
	I0826 10:48:19.215314  107298 system_pods.go:61] "kube-scheduler-addons-530639" [f4364302-4a0a-450f-90b4-b0938fc5ee65] Running
	I0826 10:48:19.215320  107298 system_pods.go:61] "metrics-server-8988944d9-jrwr8" [9e91fb1a-4430-468c-81e7-4017deff1c3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 10:48:19.215324  107298 system_pods.go:61] "nvidia-device-plugin-daemonset-dwxvz" [ec199bca-5011-4285-b91f-ad5994dfe228] Running
	I0826 10:48:19.215330  107298 system_pods.go:61] "registry-6fb4cdfc84-22wjc" [32d6b7ea-5422-4b4d-a7fe-209b1fae6bb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0826 10:48:19.215337  107298 system_pods.go:61] "registry-proxy-vmr7f" [b4617f2b-ddb1-47b0-baf2-2418c37ffd7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0826 10:48:19.215345  107298 system_pods.go:61] "snapshot-controller-56fcc65765-4x5ld" [c3ed019a-c3de-4dea-bcd6-48b9d755cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.215354  107298 system_pods.go:61] "snapshot-controller-56fcc65765-whvlf" [4b5b9866-9d35-4282-8de5-c1f17deb402d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.215360  107298 system_pods.go:61] "storage-provisioner" [1241b73f-229a-41df-830b-18467fa1c581] Running
	I0826 10:48:19.215371  107298 system_pods.go:61] "tiller-deploy-b48cc5f79-rr874" [a5ad8512-3f72-43be-a53c-23106bcd3367] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0826 10:48:19.215384  107298 system_pods.go:74] duration metric: took 10.102347ms to wait for pod list to return data ...
	I0826 10:48:19.215399  107298 default_sa.go:34] waiting for default service account to be created ...
	I0826 10:48:19.218267  107298 default_sa.go:45] found service account: "default"
	I0826 10:48:19.218295  107298 default_sa.go:55] duration metric: took 2.886012ms for default service account to be created ...
	I0826 10:48:19.218304  107298 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 10:48:19.227556  107298 system_pods.go:86] 18 kube-system pods found
	I0826 10:48:19.227591  107298 system_pods.go:89] "coredns-6f6b679f8f-wkxkf" [22b66a68-1ed8-47c0-98fb-681f0fc08eca] Running
	I0826 10:48:19.227601  107298 system_pods.go:89] "csi-hostpath-attacher-0" [5b08e2d1-6ecc-4500-82c7-1163b840f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0826 10:48:19.227607  107298 system_pods.go:89] "csi-hostpath-resizer-0" [b3b0e195-ef58-49e3-9bc3-197ea739961f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0826 10:48:19.227616  107298 system_pods.go:89] "csi-hostpathplugin-dqt92" [e5c11c5c-dc5c-4e44-90bd-7fd30cff1ebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0826 10:48:19.227621  107298 system_pods.go:89] "etcd-addons-530639" [083a7cd1-96ca-428a-b150-66940ba38303] Running
	I0826 10:48:19.227625  107298 system_pods.go:89] "kube-apiserver-addons-530639" [33036b21-fd01-4dc2-a607-621408bba9ab] Running
	I0826 10:48:19.227629  107298 system_pods.go:89] "kube-controller-manager-addons-530639" [82b4411c-6afc-4b37-a8b4-c5c859cf55d4] Running
	I0826 10:48:19.227638  107298 system_pods.go:89] "kube-ingress-dns-minikube" [4388a77f-5011-4640-bee8-9dabf8fa9b50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0826 10:48:19.227644  107298 system_pods.go:89] "kube-proxy-qbghq" [041a740f-019e-4b5a-b615-018af363dbb1] Running
	I0826 10:48:19.227649  107298 system_pods.go:89] "kube-scheduler-addons-530639" [f4364302-4a0a-450f-90b4-b0938fc5ee65] Running
	I0826 10:48:19.227659  107298 system_pods.go:89] "metrics-server-8988944d9-jrwr8" [9e91fb1a-4430-468c-81e7-4017deff1c3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 10:48:19.227665  107298 system_pods.go:89] "nvidia-device-plugin-daemonset-dwxvz" [ec199bca-5011-4285-b91f-ad5994dfe228] Running
	I0826 10:48:19.227671  107298 system_pods.go:89] "registry-6fb4cdfc84-22wjc" [32d6b7ea-5422-4b4d-a7fe-209b1fae6bb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0826 10:48:19.227680  107298 system_pods.go:89] "registry-proxy-vmr7f" [b4617f2b-ddb1-47b0-baf2-2418c37ffd7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0826 10:48:19.227690  107298 system_pods.go:89] "snapshot-controller-56fcc65765-4x5ld" [c3ed019a-c3de-4dea-bcd6-48b9d755cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.227698  107298 system_pods.go:89] "snapshot-controller-56fcc65765-whvlf" [4b5b9866-9d35-4282-8de5-c1f17deb402d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.227703  107298 system_pods.go:89] "storage-provisioner" [1241b73f-229a-41df-830b-18467fa1c581] Running
	I0826 10:48:19.227708  107298 system_pods.go:89] "tiller-deploy-b48cc5f79-rr874" [a5ad8512-3f72-43be-a53c-23106bcd3367] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0826 10:48:19.227717  107298 system_pods.go:126] duration metric: took 9.407266ms to wait for k8s-apps to be running ...
	I0826 10:48:19.227727  107298 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 10:48:19.227783  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 10:48:19.243008  107298 system_svc.go:56] duration metric: took 15.266444ms WaitForService to wait for kubelet
	I0826 10:48:19.243047  107298 kubeadm.go:582] duration metric: took 19.73788638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 10:48:19.243071  107298 node_conditions.go:102] verifying NodePressure condition ...
	I0826 10:48:19.247386  107298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 10:48:19.247415  107298 node_conditions.go:123] node cpu capacity is 2
	I0826 10:48:19.247443  107298 node_conditions.go:105] duration metric: took 4.367236ms to run NodePressure ...
	I0826 10:48:19.247457  107298 start.go:241] waiting for startup goroutines ...
	I0826 10:48:19.331566  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:19.333731  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:19.334177  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:19.406009  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:19.832849  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:19.834601  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:19.836920  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:19.906989  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:20.332290  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:20.336609  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:20.336643  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:20.407021  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:20.831647  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:20.838542  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:20.838643  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:20.906521  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:21.333182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:21.340154  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:21.343100  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:21.407728  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:21.831300  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:21.834920  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:21.835963  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:21.906708  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:22.331436  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:22.335245  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:22.335398  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:22.406195  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:22.832391  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:22.839473  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:22.840457  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:22.909564  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:23.331747  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:23.334013  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:23.334412  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:23.542973  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:23.832582  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:23.835493  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:23.836458  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:23.906109  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:24.334607  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:24.335860  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:24.336623  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:24.406530  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:24.831594  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:24.835189  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:24.835447  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:24.905521  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:25.331010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:25.334500  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:25.335146  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:25.408118  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:25.830999  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:25.833198  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:25.834096  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:25.905505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:26.331315  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:26.334482  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:26.334528  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:26.406446  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:26.832908  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:26.835441  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:26.835824  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:26.907010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:27.331966  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:27.334906  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:27.335676  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:27.406060  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:27.835148  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:27.835309  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:27.835577  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:27.906413  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:28.331377  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:28.334554  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:28.334795  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:28.406321  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:28.830975  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:28.833256  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:28.835018  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:28.908116  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:29.330919  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:29.333488  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:29.337487  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:29.407949  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:29.831891  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:29.833834  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:29.834519  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:29.906330  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:30.331438  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:30.334410  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:30.334717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:30.406938  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:30.831988  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:30.835620  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:30.835810  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:30.906442  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:31.331285  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:31.334226  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:31.334764  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:31.407026  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:31.832038  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:31.834371  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:31.835041  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:31.906040  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:32.332116  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:32.334119  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:32.334636  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:32.405977  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:32.832481  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:32.834548  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:32.835607  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:32.906369  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:33.352800  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:33.352894  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:33.353401  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:33.579162  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:33.833470  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:33.835496  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:33.835897  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:33.906136  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:34.331914  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:34.334939  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:34.335588  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:34.406083  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:34.832691  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:34.834540  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:34.834921  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:34.906507  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:35.332280  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:35.336685  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:35.337560  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:35.407020  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:35.834097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:35.835019  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:35.835149  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:35.907118  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:36.334675  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:36.335095  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:36.336063  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:36.405863  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:36.832544  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:36.836553  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:36.836907  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:36.905773  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:37.332209  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:37.335078  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:37.336218  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:37.406551  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:37.831261  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:37.834263  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:37.835205  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:37.906266  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:38.333248  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:38.335889  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:38.336524  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:38.406425  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:38.831645  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:38.834895  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:38.835516  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:38.933600  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:39.331190  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:39.333954  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:39.334038  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:39.405831  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:39.831033  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:39.833325  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:39.834815  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:39.906182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:40.331364  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:40.336530  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:40.336762  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:40.407104  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:40.830494  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:40.835359  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:40.835442  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:40.906611  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:41.331043  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:41.333726  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:41.334373  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:41.405957  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:41.832912  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:41.834862  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:41.835280  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:41.905386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:42.331694  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:42.337206  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:42.337255  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:42.406196  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:42.831800  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:42.834289  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:42.834564  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:42.906171  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:43.331678  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:43.334792  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:43.335246  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:43.436081  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:43.831975  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:43.838692  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:43.839149  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:43.912196  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:44.332034  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:44.334323  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:44.334471  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:44.406371  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:44.831495  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:44.834868  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:44.834894  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:44.906333  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:45.332097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:45.333506  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:45.335371  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:45.406703  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:45.831229  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:45.834340  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:45.835335  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:45.907188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:46.331562  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:46.335190  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:46.335488  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:46.405824  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:46.832167  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:46.834605  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:46.835706  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:46.906344  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:47.331017  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:47.335717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:47.336106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:47.406324  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:47.831330  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:47.833529  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:47.835156  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:47.905508  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:48.331490  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:48.334321  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:48.334546  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:48.406284  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:48.961937  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:48.962506  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:48.962904  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:48.963410  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:49.331613  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:49.333937  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:49.335175  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:49.405825  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:49.831915  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:49.835112  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:49.836530  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:49.906098  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:50.331538  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:50.337336  107298 kapi.go:107] duration metric: took 42.007017907s to wait for kubernetes.io/minikube-addons=registry ...
	I0826 10:48:50.337393  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:50.407088  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:50.831547  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:50.834419  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:50.906138  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:51.331047  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:51.334074  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:51.405579  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:51.831569  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:51.834699  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:51.907515  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:52.333622  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:52.335791  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:52.435626  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:52.831742  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:52.835502  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:52.906320  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:53.332208  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:53.336841  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:53.409561  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:53.831432  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:53.838429  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:53.905534  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:54.331843  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:54.334717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:54.406417  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.041771  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.042675  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.043144  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.333373  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.337752  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.437386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.831558  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.834545  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.906131  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:56.331120  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:56.338380  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:56.407623  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:56.833014  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:56.836180  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:56.906644  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:57.331073  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:57.334613  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:57.406453  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:57.831317  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:57.834375  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:57.905944  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:58.332386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:58.335066  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:58.406240  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:58.831794  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:58.834297  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:58.906103  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:59.331533  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:59.334204  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:59.406869  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:59.830823  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:59.834516  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:59.905763  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:00.331601  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:00.334387  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:00.406106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:00.831255  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:00.835117  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:00.907114  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:01.331810  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:01.334394  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:01.406240  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:01.831783  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:01.834506  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:01.906434  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:02.332188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:02.334724  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:02.407694  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:02.831323  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:02.834559  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:02.905920  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:03.332053  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:03.335037  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:03.406463  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:03.831525  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:03.833994  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:03.906684  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:04.331225  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:04.335071  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:04.406294  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:04.831025  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:04.834335  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:04.906188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:05.331106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:05.334802  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:05.407108  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:05.852185  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:05.944317  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:05.944413  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:06.330901  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:06.334237  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:06.405455  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:06.831241  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:06.834339  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:06.905505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:07.330968  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:07.334410  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:07.406723  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:07.833019  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:07.835745  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:07.907398  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:08.331670  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:08.334918  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:08.406045  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:08.842419  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:08.844686  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:08.906570  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:09.332380  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:09.335241  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:09.405818  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:09.832263  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:09.835137  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:09.934532  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:10.332036  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:10.334412  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:10.406308  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:10.831939  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:10.836360  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.396630  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:11.396833  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.397531  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:11.410122  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:11.831120  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:11.834765  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.906853  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:12.331701  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:12.335277  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:12.405751  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:12.831126  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:12.834746  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:12.906237  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:13.332004  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:13.437080  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:13.437250  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:13.832326  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:13.834284  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:13.921527  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:14.333782  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:14.336731  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:14.407254  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:14.833692  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:14.839153  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:14.906402  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:15.333236  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:15.336121  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:15.405620  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:15.831992  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:15.835724  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:15.912050  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:16.332519  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:16.342398  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:16.407498  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:16.831775  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:16.836671  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:16.906376  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:17.331446  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:17.334923  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:17.405595  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:17.831029  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:17.834357  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:17.913955  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:18.501260  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:18.501489  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:18.501850  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:18.832007  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:18.834526  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:18.907468  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:19.331943  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:19.336074  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:19.408218  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:19.831848  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:19.836042  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:19.906280  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:20.331231  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:20.334876  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:20.406341  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:20.878426  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:20.878825  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:20.998874  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:21.331203  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:21.334220  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:21.405454  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:21.831320  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:21.834889  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:21.906598  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:22.331299  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:22.333953  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:22.405546  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:22.837014  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:22.837818  107298 kapi.go:107] duration metric: took 1m14.507073831s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0826 10:49:22.906673  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:23.330962  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:23.406112  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:23.832414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:23.933956  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:24.332355  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:24.407375  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:24.832379  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:24.906132  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:25.330592  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:25.406926  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:25.832522  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:25.906726  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:26.331918  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:26.406042  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:26.832414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:26.907628  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:27.331597  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:27.405677  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:27.831504  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:27.910799  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:28.332815  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:28.435040  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:28.835299  107298 kapi.go:107] duration metric: took 1m17.007809993s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0826 10:49:28.836778  107298 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-530639 cluster.
	I0826 10:49:28.837955  107298 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0826 10:49:28.839307  107298 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
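The three gcp-auth messages above describe the addon's opt-out label. As a minimal sketch only (pod name, namespace, image, and the label value "true" are illustrative assumptions, not values from this test run), a pod carrying the gcp-auth-skip-secret label could be created with client-go like this:

// Illustrative sketch only: create a pod carrying the gcp-auth-skip-secret
// label referenced in the log above, so the gcp-auth addon skips mounting
// GCP credentials into it. Name, namespace, image, and the label value
// "true" are assumptions for the example, not values from this test run.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (the context minikube configured above).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-auth-example",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

As the log notes, pods created before the addon was enabled only pick up credentials after being recreated or after rerunning addons enable with --refresh.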
	I0826 10:49:28.935418  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:29.405952  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:29.906949  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:30.405914  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:30.907336  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:31.406364  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:31.906023  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:32.406317  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:32.906873  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:33.407125  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:33.906462  107298 kapi.go:107] duration metric: took 1m24.005211968s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0826 10:49:33.908602  107298 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, yakd, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0826 10:49:33.909939  107298 addons.go:510] duration metric: took 1m34.404745333s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget yakd helm-tiller storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0826 10:49:33.909983  107298 start.go:246] waiting for cluster config update ...
	I0826 10:49:33.910008  107298 start.go:255] writing updated cluster config ...
	I0826 10:49:33.910295  107298 ssh_runner.go:195] Run: rm -f paused
	I0826 10:49:33.969647  107298 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 10:49:33.971684  107298 out.go:177] * Done! kubectl is now configured to use "addons-530639" cluster and "default" namespace by default
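For context on the long runs of kapi.go "waiting for pod ..., current state: Pending" lines and the closing "duration metric: took ..." lines above: they reflect a label-selector poll against the cluster. The following is a rough sketch of that pattern under stated assumptions (it is not minikube's kapi.go implementation; the namespace, poll interval, and timeout are placeholders):

// Rough sketch, not minikube's code: poll pods matching a label selector
// until every match reports Running, logging progress the way the
// "waiting for pod" and "duration metric" lines above do.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // placeholder interval
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}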
	
	
	==> CRI-O <==
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.204829269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614204803081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fed23a81-e99d-4a47-92f5-b047987442e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.205644572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d81616b-a04e-4dec-9a04-a48c30e65da6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.205724333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d81616b-a04e-4dec-9a04-a48c30e65da6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.206026134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a26daa88ccdcc99d5bf4aaf7246f59c7ba0064b1ae57ade9a8abd3a34e88b,PodSandboxId:5273804aac3e16135a940695d62bb2ad55223f4b73b64a765cc3091648eb6ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347616612804,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6wfwb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 429d12fd-8040-4e08-a869-59f7efd36b43,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17451c8fb4826e6ecc8fa27a9992d885684c9190bbd3e3ef2048ec3dc2efd37,PodSandboxId:0280c8d1d116b4db825e39eecd77df04e14cd0e57a30b88f9719f3b959ef9614,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347462928095,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cnptb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7a5169a-05ee-4455-8383-51444d52d948,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa82
3d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f
2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724669268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d81616b-a04e-4dec-9a04-a48c30e65da6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.244108929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afc123c7-95da-44c9-aa0d-2ea78dd151fc name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.244231918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afc123c7-95da-44c9-aa0d-2ea78dd151fc name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.245592276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2f4e754-8d53-4935-968b-f55f64228522 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.246863895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614246834282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2f4e754-8d53-4935-968b-f55f64228522 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.247422644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dc848ac-4c7f-431e-9dee-d91afa4ded8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.247492960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dc848ac-4c7f-431e-9dee-d91afa4ded8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.247775999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a26daa88ccdcc99d5bf4aaf7246f59c7ba0064b1ae57ade9a8abd3a34e88b,PodSandboxId:5273804aac3e16135a940695d62bb2ad55223f4b73b64a765cc3091648eb6ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347616612804,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6wfwb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 429d12fd-8040-4e08-a869-59f7efd36b43,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17451c8fb4826e6ecc8fa27a9992d885684c9190bbd3e3ef2048ec3dc2efd37,PodSandboxId:0280c8d1d116b4db825e39eecd77df04e14cd0e57a30b88f9719f3b959ef9614,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347462928095,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cnptb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7a5169a-05ee-4455-8383-51444d52d948,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa82
3d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f
2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724669268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2dc848ac-4c7f-431e-9dee-d91afa4ded8a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.285035408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb10994f-517d-47b1-b7ef-41cb28d72cc4 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.285154960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb10994f-517d-47b1-b7ef-41cb28d72cc4 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.286862749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c86bef46-d690-4606-8452-e6aa57e1ac3b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.288230010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614288162417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c86bef46-d690-4606-8452-e6aa57e1ac3b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.288747838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dea8dee6-02b1-4ed9-85e2-905b017036ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.288805042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dea8dee6-02b1-4ed9-85e2-905b017036ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.289302272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a26daa88ccdcc99d5bf4aaf7246f59c7ba0064b1ae57ade9a8abd3a34e88b,PodSandboxId:5273804aac3e16135a940695d62bb2ad55223f4b73b64a765cc3091648eb6ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347616612804,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6wfwb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 429d12fd-8040-4e08-a869-59f7efd36b43,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17451c8fb4826e6ecc8fa27a9992d885684c9190bbd3e3ef2048ec3dc2efd37,PodSandboxId:0280c8d1d116b4db825e39eecd77df04e14cd0e57a30b88f9719f3b959ef9614,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347462928095,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cnptb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7a5169a-05ee-4455-8383-51444d52d948,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa82
3d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f
2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724669268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dea8dee6-02b1-4ed9-85e2-905b017036ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.329256835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1982506-742c-4f0a-bafa-9df22d87bbc1 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.329367422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1982506-742c-4f0a-bafa-9df22d87bbc1 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.331166216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b754776-83f1-458c-a804-cc0777647907 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.333441405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614333402284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b754776-83f1-458c-a804-cc0777647907 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.334241497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc884df-7b6c-447a-8ca3-03745223a4ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.334404468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc884df-7b6c-447a-8ca3-03745223a4ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:53:34 addons-530639 crio[683]: time="2024-08-26 10:53:34.334886731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a26daa88ccdcc99d5bf4aaf7246f59c7ba0064b1ae57ade9a8abd3a34e88b,PodSandboxId:5273804aac3e16135a940695d62bb2ad55223f4b73b64a765cc3091648eb6ef7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347616612804,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6wfwb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 429d12fd-8040-4e08-a869-59f7efd36b43,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17451c8fb4826e6ecc8fa27a9992d885684c9190bbd3e3ef2048ec3dc2efd37,PodSandboxId:0280c8d1d116b4db825e39eecd77df04e14cd0e57a30b88f9719f3b959ef9614,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724669347462928095,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cnptb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7a5169a-05ee-4455-8383-51444d52d948,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa82
3d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f
2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724669268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcc884df-7b6c-447a-8ca3-03745223a4ee name=/runtime.v1.RuntimeService/ListContainers
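	The debug entries above are CRI-O's trace of the CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) that back the container status table below. As a minimal sketch of replaying them by hand on this node — assuming crictl is present in the minikube VM and CRI-O listens on its default socket, which is the case for this image but should be verified elsewhere:
	
	  # open a shell on the test node
	  out/minikube-linux-amd64 -p addons-530639 ssh
	
	  # inside the VM: the same RPCs the log shows, issued via crictl
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers, no filter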
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	133475e45b395       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   7c61481d4c53c       hello-world-app-55bf9c44b4-s42mb
	bfdc81d3d54b8       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   ea9b34094dd38       nginx
	be3ca2a5e47b8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   96985e5c2c9fb       busybox
	f60a26daa88cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              patch                     0                   5273804aac3e1       ingress-nginx-admission-patch-6wfwb
	b17451c8fb482       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              create                    0                   0280c8d1d116b       ingress-nginx-admission-create-cnptb
	121bffb9cc142       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   578fa2369817b       metrics-server-8988944d9-jrwr8
	5df9b3d6329be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   da59222529474       storage-provisioner
	87dd1ca50a348       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   87f28fd3acff7       coredns-6f6b679f8f-wkxkf
	f706c8457e5a4       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   0ae7cb019e3d9       kube-proxy-qbghq
	850d60ba14a0e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   d6aaa3a286007       kube-controller-manager-addons-530639
	dbc3017a5018f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   324e6d2c78486       kube-apiserver-addons-530639
	c7ef02e3f3f47       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   4bb95cb3cc16e       etcd-addons-530639
	cf0248ce67564       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   532ee159b1e2e       kube-scheduler-addons-530639
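The container IDs in this table are truncated to 13 characters; the full IDs appear in the RuntimeService/ListContainers dump above. A minimal way to reproduce this listing on the test VM, assuming the addons-530639 profile is still running and using the crictl binary bundled in the minikube guest:

	# All CRI containers (including the Exited admission-webhook jobs) and their pod sandboxes:
	minikube -p addons-530639 ssh "sudo crictl ps -a"
	minikube -p addons-530639 ssh "sudo crictl pods"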
	
	
	==> coredns [87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d] <==
	[INFO] 10.244.0.6:55852 - 11952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000164892s
	[INFO] 10.244.0.6:34625 - 29309 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115511s
	[INFO] 10.244.0.6:34625 - 12415 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109533s
	[INFO] 10.244.0.6:48051 - 33099 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086298s
	[INFO] 10.244.0.6:48051 - 6580 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000172189s
	[INFO] 10.244.0.6:42931 - 51868 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001587s
	[INFO] 10.244.0.6:42931 - 49050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000343912s
	[INFO] 10.244.0.6:56078 - 36944 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121876s
	[INFO] 10.244.0.6:56078 - 19309 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179768s
	[INFO] 10.244.0.6:56880 - 17373 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006441s
	[INFO] 10.244.0.6:56880 - 41690 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086919s
	[INFO] 10.244.0.6:54257 - 57543 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036253s
	[INFO] 10.244.0.6:54257 - 63429 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085994s
	[INFO] 10.244.0.6:37482 - 44331 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005419s
	[INFO] 10.244.0.6:37482 - 5160 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000083837s
	[INFO] 10.244.0.22:56868 - 42487 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496744s
	[INFO] 10.244.0.22:51782 - 33332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104268s
	[INFO] 10.244.0.22:52598 - 48035 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136635s
	[INFO] 10.244.0.22:36639 - 25382 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157254s
	[INFO] 10.244.0.22:58956 - 16134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181509s
	[INFO] 10.244.0.22:48044 - 1700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000177677s
	[INFO] 10.244.0.22:48615 - 29917 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000974955s
	[INFO] 10.244.0.22:48073 - 30258 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000530425s
	[INFO] 10.244.0.26:37839 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000559466s
	[INFO] 10.244.0.26:50142 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153453s
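The NXDOMAIN entries above are not failures: with the pod's default resolv.conf search path (ndots:5), each lookup of registry.kube-system.svc.cluster.local is first tried against the search suffixes before the fully-qualified name returns NOERROR. A quick in-cluster check, assuming the busybox pod deployed by the test is still running:

	# Resolve the registry service and inspect the search suffixes that produce the NXDOMAIN attempts:
	kubectl --context addons-530639 exec busybox -- nslookup registry.kube-system.svc.cluster.local
	kubectl --context addons-530639 exec busybox -- cat /etc/resolv.conf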
	
	
	==> describe nodes <==
	Name:               addons-530639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-530639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=addons-530639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T10_47_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-530639
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 10:47:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-530639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 10:53:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 10:51:29 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 10:51:29 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 10:51:29 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 10:51:29 +0000   Mon, 26 Aug 2024 10:47:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-530639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 18660512e3354c8b94a86e929f9b1e5f
	  System UUID:                18660512-e335-4c8b-94a8-6e929f9b1e5f
	  Boot ID:                    0105ed9d-b779-4196-ba39-b27baf284166
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  default                     hello-world-app-55bf9c44b4-s42mb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-6f6b679f8f-wkxkf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m35s
	  kube-system                 etcd-addons-530639                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m40s
	  kube-system                 kube-apiserver-addons-530639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-controller-manager-addons-530639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-qbghq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-addons-530639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 metrics-server-8988944d9-jrwr8           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m30s  kube-proxy       
	  Normal  Starting                 5m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s  kubelet          Node addons-530639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s  kubelet          Node addons-530639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s  kubelet          Node addons-530639 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m39s  kubelet          Node addons-530639 status is now: NodeReady
	  Normal  RegisteredNode           5m36s  node-controller  Node addons-530639 event: Registered Node addons-530639 in Controller
	
	
	==> dmesg <==
	[Aug26 10:48] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.215658] kauditd_printk_skb: 145 callbacks suppressed
	[  +8.039213] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.179381] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.372386] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.805376] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 10:49] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.375674] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.151947] kauditd_printk_skb: 44 callbacks suppressed
	[  +8.367540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.560554] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.170050] kauditd_printk_skb: 48 callbacks suppressed
	[ +13.674326] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.839545] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 10:50] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.952392] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.125009] kauditd_printk_skb: 49 callbacks suppressed
	[  +7.893956] kauditd_printk_skb: 43 callbacks suppressed
	[  +7.297447] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.127908] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.005216] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.928262] kauditd_printk_skb: 72 callbacks suppressed
	[Aug26 10:51] kauditd_printk_skb: 49 callbacks suppressed
	[Aug26 10:53] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.166493] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9] <==
	{"level":"info","ts":"2024-08-26T10:49:20.863934Z","caller":"traceutil/trace.go:171","msg":"trace[524759163] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1096; }","duration":"179.001135ms","start":"2024-08-26T10:49:20.684922Z","end":"2024-08-26T10:49:20.863923Z","steps":["trace[524759163] 'range keys from in-memory index tree'  (duration: 178.81365ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:20.976110Z","caller":"traceutil/trace.go:171","msg":"trace[273128302] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"139.70632ms","start":"2024-08-26T10:49:20.836385Z","end":"2024-08-26T10:49:20.976091Z","steps":["trace[273128302] 'process raft request'  (duration: 139.491341ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:20.983355Z","caller":"traceutil/trace.go:171","msg":"trace[1910683933] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"114.30175ms","start":"2024-08-26T10:49:20.869031Z","end":"2024-08-26T10:49:20.983333Z","steps":["trace[1910683933] 'process raft request'  (duration: 113.493673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:31.723378Z","caller":"traceutil/trace.go:171","msg":"trace[82969359] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"226.209751ms","start":"2024-08-26T10:49:31.497152Z","end":"2024-08-26T10:49:31.723361Z","steps":["trace[82969359] 'process raft request'  (duration: 225.752448ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:26.925559Z","caller":"traceutil/trace.go:171","msg":"trace[2143165711] linearizableReadLoop","detail":"{readStateIndex:1574; appliedIndex:1573; }","duration":"111.85999ms","start":"2024-08-26T10:50:26.813682Z","end":"2024-08-26T10:50:26.925542Z","steps":["trace[2143165711] 'read index received'  (duration: 111.678539ms)","trace[2143165711] 'applied index is now lower than readState.Index'  (duration: 180.99µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T10:50:26.925664Z","caller":"traceutil/trace.go:171","msg":"trace[717428587] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1521; }","duration":"136.051627ms","start":"2024-08-26T10:50:26.789606Z","end":"2024-08-26T10:50:26.925658Z","steps":["trace[717428587] 'process raft request'  (duration: 135.824777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:26.925923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.195427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-26T10:50:26.925972Z","caller":"traceutil/trace.go:171","msg":"trace[1800604281] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1521; }","duration":"112.307538ms","start":"2024-08-26T10:50:26.813656Z","end":"2024-08-26T10:50:26.925964Z","steps":["trace[1800604281] 'agreement among raft nodes before linearized reading'  (duration: 112.172704ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:31.466516Z","caller":"traceutil/trace.go:171","msg":"trace[1925241876] linearizableReadLoop","detail":"{readStateIndex:1612; appliedIndex:1611; }","duration":"280.561173ms","start":"2024-08-26T10:50:31.185931Z","end":"2024-08-26T10:50:31.466492Z","steps":["trace[1925241876] 'read index received'  (duration: 280.413549ms)","trace[1925241876] 'applied index is now lower than readState.Index'  (duration: 147.002µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:31.466657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.712124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:31.466675Z","caller":"traceutil/trace.go:171","msg":"trace[1529817644] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1558; }","duration":"280.756023ms","start":"2024-08-26T10:50:31.185914Z","end":"2024-08-26T10:50:31.466670Z","steps":["trace[1529817644] 'agreement among raft nodes before linearized reading'  (duration: 280.6527ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:31.466921Z","caller":"traceutil/trace.go:171","msg":"trace[1623315541] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"299.082237ms","start":"2024-08-26T10:50:31.167827Z","end":"2024-08-26T10:50:31.466910Z","steps":["trace[1623315541] 'process raft request'  (duration: 298.581965ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:33.611823Z","caller":"traceutil/trace.go:171","msg":"trace[1173019852] linearizableReadLoop","detail":"{readStateIndex:1615; appliedIndex:1614; }","duration":"130.313717ms","start":"2024-08-26T10:50:33.481495Z","end":"2024-08-26T10:50:33.611808Z","steps":["trace[1173019852] 'read index received'  (duration: 130.120378ms)","trace[1173019852] 'applied index is now lower than readState.Index'  (duration: 192.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:33.611933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.420322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:33.611953Z","caller":"traceutil/trace.go:171","msg":"trace[432714119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1560; }","duration":"130.45795ms","start":"2024-08-26T10:50:33.481490Z","end":"2024-08-26T10:50:33.611948Z","steps":["trace[432714119] 'agreement among raft nodes before linearized reading'  (duration: 130.39853ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:33.720134Z","caller":"traceutil/trace.go:171","msg":"trace[740183580] linearizableReadLoop","detail":"{readStateIndex:1616; appliedIndex:1615; }","duration":"107.119888ms","start":"2024-08-26T10:50:33.613001Z","end":"2024-08-26T10:50:33.720121Z","steps":["trace[740183580] 'read index received'  (duration: 105.131996ms)","trace[740183580] 'applied index is now lower than readState.Index'  (duration: 1.987494ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T10:50:33.720259Z","caller":"traceutil/trace.go:171","msg":"trace[307227972] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"107.44277ms","start":"2024-08-26T10:50:33.612803Z","end":"2024-08-26T10:50:33.720245Z","steps":["trace[307227972] 'process raft request'  (duration: 105.43197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:33.720331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.31431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:33.720354Z","caller":"traceutil/trace.go:171","msg":"trace[1408992908] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1561; }","duration":"107.352183ms","start":"2024-08-26T10:50:33.612996Z","end":"2024-08-26T10:50:33.720348Z","steps":["trace[1408992908] 'agreement among raft nodes before linearized reading'  (duration: 107.266317ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:50.016616Z","caller":"traceutil/trace.go:171","msg":"trace[448022152] transaction","detail":"{read_only:false; response_revision:1667; number_of_response:1; }","duration":"348.950674ms","start":"2024-08-26T10:50:49.667608Z","end":"2024-08-26T10:50:50.016558Z","steps":["trace[448022152] 'process raft request'  (duration: 348.828505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:50.016902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T10:50:49.667591Z","time spent":"349.158384ms","remote":"127.0.0.1:46290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1633 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-26T10:50:50.017506Z","caller":"traceutil/trace.go:171","msg":"trace[1108140281] linearizableReadLoop","detail":"{readStateIndex:1727; appliedIndex:1727; }","duration":"204.104825ms","start":"2024-08-26T10:50:49.813391Z","end":"2024-08-26T10:50:50.017496Z","steps":["trace[1108140281] 'read index received'  (duration: 204.10041ms)","trace[1108140281] 'applied index is now lower than readState.Index'  (duration: 3.442µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:50.017693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.293445ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-xfr8k\" ","response":"range_response_count:1 size:10376"}
	{"level":"info","ts":"2024-08-26T10:50:50.017718Z","caller":"traceutil/trace.go:171","msg":"trace[1471716803] range","detail":"{range_begin:/registry/pods/gadget/gadget-xfr8k; range_end:; response_count:1; response_revision:1667; }","duration":"204.326554ms","start":"2024-08-26T10:50:49.813386Z","end":"2024-08-26T10:50:50.017713Z","steps":["trace[1471716803] 'agreement among raft nodes before linearized reading'  (duration: 204.151834ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:50.022492Z","caller":"traceutil/trace.go:171","msg":"trace[1105980089] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"206.381948ms","start":"2024-08-26T10:50:49.816093Z","end":"2024-08-26T10:50:50.022475Z","steps":["trace[1105980089] 'process raft request'  (duration: 206.311908ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:53:34 up 6 min,  0 users,  load average: 0.19, 0.85, 0.50
	Linux addons-530639 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283] <==
	E0826 10:50:01.787679       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0826 10:50:01.788954       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0826 10:50:01.790406       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.916505ms" method="GET" path="/api/v1/pods" result=null
	I0826 10:50:21.539546       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.219.107"}
	E0826 10:50:33.721846       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0826 10:50:42.251872       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0826 10:50:57.733999       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0826 10:50:58.852809       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0826 10:50:59.242864       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.39.11:8443->10.244.0.17:43406: write: connection reset by peer" logger="UnhandledError"
	I0826 10:51:01.749646       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0826 10:51:01.943459       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.137.75"}
	I0826 10:51:05.883527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.883581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.925932       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.925977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.935635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.935856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.970054       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.970724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:06.025827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:06.025958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0826 10:51:06.934630       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0826 10:51:07.027311       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0826 10:51:07.199012       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0826 10:53:24.484691       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.97.205"}
	
	
	==> kube-controller-manager [850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5] <==
	W0826 10:52:17.159120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:52:17.159249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:52:24.737326       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:52:24.737383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:52:28.546137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:52:28.546237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:52:29.777550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:52:29.777608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:53:00.689378       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:53:00.689622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:53:13.441419       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:53:13.441604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:53:16.852101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:53:16.852373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:53:21.329008       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:53:21.329073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0826 10:53:24.301619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.654586ms"
	I0826 10:53:24.314734       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.012347ms"
	I0826 10:53:24.315432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="61.69µs"
	I0826 10:53:24.329442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.429µs"
	I0826 10:53:26.330540       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0826 10:53:26.337045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.38µs"
	I0826 10:53:26.344099       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0826 10:53:27.711756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.375813ms"
	I0826 10:53:27.711841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.383µs"
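The repeated *v1.PartialObjectMetadata list failures line up with the kube-apiserver log above, where the snapshot.storage.k8s.io and gadget.kinvolk.io watchers are terminated around 10:51: the metadata informers used by the garbage-collector and quota controllers keep retrying resources whose CRDs were removed when those addons were disabled. One way to confirm which of these API resources remain discoverable:

	# Should return nothing once the snapshot and gadget CRDs are gone:
	kubectl --context addons-530639 api-resources | grep -E "snapshot|gadget"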
	
	
	==> kube-proxy [f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 10:48:03.807402       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 10:48:03.837882       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0826 10:48:03.837957       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 10:48:03.925794       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 10:48:03.925830       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 10:48:03.925856       1 server_linux.go:169] "Using iptables Proxier"
	I0826 10:48:03.932881       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 10:48:03.933093       1 server.go:483] "Version info" version="v1.31.0"
	I0826 10:48:03.933102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:48:03.934378       1 config.go:197] "Starting service config controller"
	I0826 10:48:03.934400       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 10:48:03.934434       1 config.go:104] "Starting endpoint slice config controller"
	I0826 10:48:03.934439       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 10:48:03.934870       1 config.go:326] "Starting node config controller"
	I0826 10:48:03.934877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 10:48:04.035238       1 shared_informer.go:320] Caches are synced for node config
	I0826 10:48:04.035289       1 shared_informer.go:320] Caches are synced for service config
	I0826 10:48:04.035328       1 shared_informer.go:320] Caches are synced for endpoint slice config
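The truncated nftables errors at the top of this section come from kube-proxy's routine startup attempt to delete any leftover nft kube-proxy tables; the guest kernel rejects the operations ("Operation not supported"), and the proxier continues in iptables mode as the later lines show, so the messages are harmless here. A sketch for double-checking the configured mode, assuming the kubeadm-style kube-proxy ConfigMap name:

	# "mode:" is empty or "iptables" when the iptables proxier is in use:
	kubectl --context addons-530639 -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
	kubectl --context addons-530639 -n kube-system logs kube-proxy-qbghq | grep Proxier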
	
	
	==> kube-scheduler [cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff] <==
	W0826 10:47:51.591515       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 10:47:51.591561       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 10:47:52.476933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 10:47:52.477013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.610460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.610603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.640226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.640357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.691788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 10:47:52.691923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.773574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.773971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.800991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 10:47:52.801128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.819877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 10:47:52.820026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.851897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.852042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.913578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 10:47:52.914170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.922023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 10:47:52.922220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.962778       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 10:47:52.962870       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0826 10:47:55.068252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 10:53:24 addons-530639 kubelet[1221]: E0826 10:53:24.804325    1221 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669604803866790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585116,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:53:24 addons-530639 kubelet[1221]: E0826 10:53:24.804538    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669604803866790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585116,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.449212    1221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv29\" (UniqueName: \"kubernetes.io/projected/4388a77f-5011-4640-bee8-9dabf8fa9b50-kube-api-access-fpv29\") pod \"4388a77f-5011-4640-bee8-9dabf8fa9b50\" (UID: \"4388a77f-5011-4640-bee8-9dabf8fa9b50\") "
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.451402    1221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4388a77f-5011-4640-bee8-9dabf8fa9b50-kube-api-access-fpv29" (OuterVolumeSpecName: "kube-api-access-fpv29") pod "4388a77f-5011-4640-bee8-9dabf8fa9b50" (UID: "4388a77f-5011-4640-bee8-9dabf8fa9b50"). InnerVolumeSpecName "kube-api-access-fpv29". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.550009    1221 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fpv29\" (UniqueName: \"kubernetes.io/projected/4388a77f-5011-4640-bee8-9dabf8fa9b50-kube-api-access-fpv29\") on node \"addons-530639\" DevicePath \"\""
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.666246    1221 scope.go:117] "RemoveContainer" containerID="7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439"
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.701245    1221 scope.go:117] "RemoveContainer" containerID="7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439"
	Aug 26 10:53:25 addons-530639 kubelet[1221]: E0826 10:53:25.702115    1221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439\": container with ID starting with 7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439 not found: ID does not exist" containerID="7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439"
	Aug 26 10:53:25 addons-530639 kubelet[1221]: I0826 10:53:25.702154    1221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439"} err="failed to get container status \"7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439\": rpc error: code = NotFound desc = could not find container \"7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439\": container with ID starting with 7ebc4a2f76144e3c0b5f2d90979da6d9602b315d8d175e7f6d614716995ab439 not found: ID does not exist"
	Aug 26 10:53:26 addons-530639 kubelet[1221]: I0826 10:53:26.553022    1221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429d12fd-8040-4e08-a869-59f7efd36b43" path="/var/lib/kubelet/pods/429d12fd-8040-4e08-a869-59f7efd36b43/volumes"
	Aug 26 10:53:26 addons-530639 kubelet[1221]: I0826 10:53:26.553503    1221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4388a77f-5011-4640-bee8-9dabf8fa9b50" path="/var/lib/kubelet/pods/4388a77f-5011-4640-bee8-9dabf8fa9b50/volumes"
	Aug 26 10:53:26 addons-530639 kubelet[1221]: I0826 10:53:26.553863    1221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a5169a-05ee-4455-8383-51444d52d948" path="/var/lib/kubelet/pods/c7a5169a-05ee-4455-8383-51444d52d948/volumes"
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.582399    1221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq28l\" (UniqueName: \"kubernetes.io/projected/7e6a5548-05f6-4c71-bae2-4dea4f538b78-kube-api-access-rq28l\") pod \"7e6a5548-05f6-4c71-bae2-4dea4f538b78\" (UID: \"7e6a5548-05f6-4c71-bae2-4dea4f538b78\") "
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.582444    1221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5548-05f6-4c71-bae2-4dea4f538b78-webhook-cert\") pod \"7e6a5548-05f6-4c71-bae2-4dea4f538b78\" (UID: \"7e6a5548-05f6-4c71-bae2-4dea4f538b78\") "
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.587354    1221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6a5548-05f6-4c71-bae2-4dea4f538b78-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7e6a5548-05f6-4c71-bae2-4dea4f538b78" (UID: "7e6a5548-05f6-4c71-bae2-4dea4f538b78"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.589437    1221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6a5548-05f6-4c71-bae2-4dea4f538b78-kube-api-access-rq28l" (OuterVolumeSpecName: "kube-api-access-rq28l") pod "7e6a5548-05f6-4c71-bae2-4dea4f538b78" (UID: "7e6a5548-05f6-4c71-bae2-4dea4f538b78"). InnerVolumeSpecName "kube-api-access-rq28l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.682682    1221 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rq28l\" (UniqueName: \"kubernetes.io/projected/7e6a5548-05f6-4c71-bae2-4dea4f538b78-kube-api-access-rq28l\") on node \"addons-530639\" DevicePath \"\""
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.682757    1221 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e6a5548-05f6-4c71-bae2-4dea4f538b78-webhook-cert\") on node \"addons-530639\" DevicePath \"\""
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.699599    1221 scope.go:117] "RemoveContainer" containerID="9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba"
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.719668    1221 scope.go:117] "RemoveContainer" containerID="9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba"
	Aug 26 10:53:29 addons-530639 kubelet[1221]: E0826 10:53:29.720106    1221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba\": container with ID starting with 9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba not found: ID does not exist" containerID="9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba"
	Aug 26 10:53:29 addons-530639 kubelet[1221]: I0826 10:53:29.720137    1221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba"} err="failed to get container status \"9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba\": rpc error: code = NotFound desc = could not find container \"9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba\": container with ID starting with 9de52273800b876f7c213846dc90388947ed85981e91c1bd2e566666613092ba not found: ID does not exist"
	Aug 26 10:53:30 addons-530639 kubelet[1221]: I0826 10:53:30.553149    1221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6a5548-05f6-4c71-bae2-4dea4f538b78" path="/var/lib/kubelet/pods/7e6a5548-05f6-4c71-bae2-4dea4f538b78/volumes"
	Aug 26 10:53:34 addons-530639 kubelet[1221]: E0826 10:53:34.808256    1221 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614807866630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:53:34 addons-530639 kubelet[1221]: E0826 10:53:34.808282    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669614807866630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
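The eviction_manager errors repeat every ten seconds because the kubelet cannot tell whether images live on a dedicated filesystem: the ImageFsInfoResponse it gets back from cri-o (dumped in the error itself) carries image filesystem usage for /var/lib/containers/storage/overlay-images but an empty ContainerFilesystems list. The errors are noisy but do not affect the test outcome. To see the same data the kubelet is parsing, one could query the runtime directly on the guest:

	# Image filesystem stats straight from cri-o:
	minikube -p addons-530639 ssh "sudo crictl imagefsinfo"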
	
	
	==> storage-provisioner [5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3] <==
	I0826 10:48:07.590913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 10:48:07.646798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 10:48:07.646867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 10:48:07.718875       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 10:48:07.722902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee!
	I0826 10:48:07.727950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70f5968a-b06f-4f29-9b7f-a8947c63df74", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee became leader
	I0826 10:48:07.894826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-530639 -n addons-530639
helpers_test.go:261: (dbg) Run:  kubectl --context addons-530639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.97s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (358.23s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.449145ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-jrwr8" [9e91fb1a-4430-468c-81e7-4017deff1c3c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004607401s
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (77.785347ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 2m34.073381229s

                                                
                                                
** /stderr **
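The test simply re-runs kubectl top pods until the metrics API starts answering; "Metrics not available for pod" means metrics-server has not yet collected or served a sample for that pod. While it polls, the state of the metrics pipeline can be checked manually (resource names taken from the pod listing in the post-mortem logs above):

	# The APIService should report Available=True once metrics-server is serving:
	kubectl --context addons-530639 get apiservice v1beta1.metrics.k8s.io
	# Scrape errors, if any, show up in the metrics-server logs:
	kubectl --context addons-530639 -n kube-system logs deploy/metrics-server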
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (70.401713ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 2m36.425057971s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (71.061311ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 2m41.826917396s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (72.203602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 2m45.596059954s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (65.940698ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 2m56.908078246s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (62.633925ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 3m9.445202813s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (64.71237ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 3m25.694942283s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (64.330657ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 3m47.321228954s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (66.195941ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 4m31.812987644s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (65.50509ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 5m1.935899657s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (65.911925ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 5m57.422150646s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (68.605753ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 6m33.83760122s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (65.697654ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 7m37.94643007s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-530639 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-530639 top pods -n kube-system: exit status 1 (67.463714ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-wkxkf, age: 8m23.508955643s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
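For context, the repeated "kubectl top pods" failures recorded above are the test harness polling until metrics-server starts answering. A minimal stand-alone sketch of that kind of check follows, for illustration only: it is not minikube's test code, and the fixed 10-second interval and 6-minute deadline are assumptions (the growing gaps in the log suggest the real harness backs off between attempts).

// pollmetrics.go - illustrative sketch, not part of the minikube test suite.
// Repeatedly runs `kubectl top pods -n kube-system` until it succeeds or a
// deadline passes, mirroring the retry pattern recorded in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const context = "addons-530639"             // profile/context name copied from the log
	deadline := time.Now().Add(6 * time.Minute) // assumed deadline, not the harness value

	for {
		out, err := exec.Command("kubectl", "--context", context,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintf(os.Stderr, "gave up waiting for metrics: %v\n%s", err, out)
			os.Exit(1)
		}
		time.Sleep(10 * time.Second) // fixed interval for simplicity
	}
}

A successful run ends as soon as kubectl top returns data; in the failed test above it never did within the allotted time, hence the exit status 1 reported at addons_test.go:431.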
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-530639 -n addons-530639
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 logs -n 25: (1.280332492s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-210128                                                                     | download-only-210128 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-754943 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | binary-mirror-754943                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44369                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-754943                                                                     | binary-mirror-754943 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-530639 --wait=true                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:49 UTC | 26 Aug 24 10:50 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-530639 ssh cat                                                                       | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | /opt/local-path-provisioner/pvc-d9488103-fa6b-4b30-86cd-3775be1f0d86_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-530639 ip                                                                            | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | -p addons-530639                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | -p addons-530639                                                                            |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | addons-530639                                                                               |                      |         |         |                     |                     |
	| addons  | addons-530639 addons                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:50 UTC | 26 Aug 24 10:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-530639 addons                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:51 UTC | 26 Aug 24 10:51 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-530639 ssh curl -s                                                                   | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-530639 ip                                                                            | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-530639 addons disable                                                                | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:53 UTC | 26 Aug 24 10:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-530639 addons                                                                        | addons-530639        | jenkins | v1.33.1 | 26 Aug 24 10:56 UTC | 26 Aug 24 10:56 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 10:47:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 10:47:13.891081  107298 out.go:345] Setting OutFile to fd 1 ...
	I0826 10:47:13.891202  107298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:13.891211  107298 out.go:358] Setting ErrFile to fd 2...
	I0826 10:47:13.891216  107298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:13.891445  107298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 10:47:13.892104  107298 out.go:352] Setting JSON to false
	I0826 10:47:13.893230  107298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1775,"bootTime":1724667459,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 10:47:13.893295  107298 start.go:139] virtualization: kvm guest
	I0826 10:47:13.895574  107298 out.go:177] * [addons-530639] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 10:47:13.896870  107298 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 10:47:13.896896  107298 notify.go:220] Checking for updates...
	I0826 10:47:13.899513  107298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 10:47:13.900862  107298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:47:13.902276  107298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:13.903614  107298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 10:47:13.904817  107298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 10:47:13.906381  107298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 10:47:13.940293  107298 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 10:47:13.941848  107298 start.go:297] selected driver: kvm2
	I0826 10:47:13.941879  107298 start.go:901] validating driver "kvm2" against <nil>
	I0826 10:47:13.941894  107298 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 10:47:13.942638  107298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:13.942727  107298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 10:47:13.958770  107298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 10:47:13.958864  107298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 10:47:13.959094  107298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 10:47:13.959168  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:13.959181  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:13.959188  107298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 10:47:13.959247  107298 start.go:340] cluster config:
	{Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:47:13.959744  107298 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:13.961916  107298 out.go:177] * Starting "addons-530639" primary control-plane node in "addons-530639" cluster
	I0826 10:47:13.963150  107298 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:13.963209  107298 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 10:47:13.963223  107298 cache.go:56] Caching tarball of preloaded images
	I0826 10:47:13.963330  107298 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 10:47:13.963345  107298 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 10:47:13.963664  107298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json ...
	I0826 10:47:13.963692  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json: {Name:mkafa60e91b41cce64f8251eb832bc8cf14e0b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:13.963899  107298 start.go:360] acquireMachinesLock for addons-530639: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 10:47:13.963968  107298 start.go:364] duration metric: took 48.469µs to acquireMachinesLock for "addons-530639"
	I0826 10:47:13.963997  107298 start.go:93] Provisioning new machine with config: &{Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 10:47:13.964065  107298 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 10:47:13.965962  107298 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0826 10:47:13.966101  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:13.966131  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:13.981368  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0826 10:47:13.981911  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:13.982506  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:13.982549  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:13.982896  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:13.983045  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:13.983204  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:13.983324  107298 start.go:159] libmachine.API.Create for "addons-530639" (driver="kvm2")
	I0826 10:47:13.983353  107298 client.go:168] LocalClient.Create starting
	I0826 10:47:13.983396  107298 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 10:47:14.061324  107298 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 10:47:14.343346  107298 main.go:141] libmachine: Running pre-create checks...
	I0826 10:47:14.343378  107298 main.go:141] libmachine: (addons-530639) Calling .PreCreateCheck
	I0826 10:47:14.343909  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:14.344344  107298 main.go:141] libmachine: Creating machine...
	I0826 10:47:14.344363  107298 main.go:141] libmachine: (addons-530639) Calling .Create
	I0826 10:47:14.344496  107298 main.go:141] libmachine: (addons-530639) Creating KVM machine...
	I0826 10:47:14.345734  107298 main.go:141] libmachine: (addons-530639) DBG | found existing default KVM network
	I0826 10:47:14.346582  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.346412  107321 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f330}
	I0826 10:47:14.346611  107298 main.go:141] libmachine: (addons-530639) DBG | created network xml: 
	I0826 10:47:14.346627  107298 main.go:141] libmachine: (addons-530639) DBG | <network>
	I0826 10:47:14.346640  107298 main.go:141] libmachine: (addons-530639) DBG |   <name>mk-addons-530639</name>
	I0826 10:47:14.346685  107298 main.go:141] libmachine: (addons-530639) DBG |   <dns enable='no'/>
	I0826 10:47:14.346712  107298 main.go:141] libmachine: (addons-530639) DBG |   
	I0826 10:47:14.346767  107298 main.go:141] libmachine: (addons-530639) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0826 10:47:14.346804  107298 main.go:141] libmachine: (addons-530639) DBG |     <dhcp>
	I0826 10:47:14.346846  107298 main.go:141] libmachine: (addons-530639) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0826 10:47:14.346877  107298 main.go:141] libmachine: (addons-530639) DBG |     </dhcp>
	I0826 10:47:14.346892  107298 main.go:141] libmachine: (addons-530639) DBG |   </ip>
	I0826 10:47:14.346904  107298 main.go:141] libmachine: (addons-530639) DBG |   
	I0826 10:47:14.346922  107298 main.go:141] libmachine: (addons-530639) DBG | </network>
	I0826 10:47:14.346939  107298 main.go:141] libmachine: (addons-530639) DBG | 
	I0826 10:47:14.352246  107298 main.go:141] libmachine: (addons-530639) DBG | trying to create private KVM network mk-addons-530639 192.168.39.0/24...
	I0826 10:47:14.421042  107298 main.go:141] libmachine: (addons-530639) DBG | private KVM network mk-addons-530639 192.168.39.0/24 created
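The two DBG lines above record the kvm2 driver defining and then starting the private libvirt network from the XML it printed just before. A rough sketch of doing the same thing directly with the libvirt Go bindings is shown below; the import path and the pared-down XML are assumptions for illustration, not the driver's actual code.

// definenet.go - illustrative sketch using the libvirt Go bindings (import path assumed).
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Pared-down version of the network XML logged above.
	xml := `<network>
  <name>mk-addons-530639</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`

	net, err := conn.NetworkDefineXML(xml)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // start the defined network
		log.Fatalf("start network: %v", err)
	}
	log.Println("network mk-addons-530639 defined and started")
}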
	I0826 10:47:14.421083  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.421018  107321 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:14.421107  107298 main.go:141] libmachine: (addons-530639) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 ...
	I0826 10:47:14.421125  107298 main.go:141] libmachine: (addons-530639) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 10:47:14.421146  107298 main.go:141] libmachine: (addons-530639) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 10:47:14.686979  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.686792  107321 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa...
	I0826 10:47:14.850851  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.850658  107321 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/addons-530639.rawdisk...
	I0826 10:47:14.850909  107298 main.go:141] libmachine: (addons-530639) DBG | Writing magic tar header
	I0826 10:47:14.850949  107298 main.go:141] libmachine: (addons-530639) DBG | Writing SSH key tar header
	I0826 10:47:14.850985  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 (perms=drwx------)
	I0826 10:47:14.850999  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:14.850786  107321 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639 ...
	I0826 10:47:14.851017  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639
	I0826 10:47:14.851031  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 10:47:14.851043  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 10:47:14.851054  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:14.851062  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 10:47:14.851071  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 10:47:14.851077  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 10:47:14.851087  107298 main.go:141] libmachine: (addons-530639) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 10:47:14.851092  107298 main.go:141] libmachine: (addons-530639) Creating domain...
	I0826 10:47:14.851106  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 10:47:14.851121  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 10:47:14.851134  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home/jenkins
	I0826 10:47:14.851147  107298 main.go:141] libmachine: (addons-530639) DBG | Checking permissions on dir: /home
	I0826 10:47:14.851153  107298 main.go:141] libmachine: (addons-530639) DBG | Skipping /home - not owner
	I0826 10:47:14.852204  107298 main.go:141] libmachine: (addons-530639) define libvirt domain using xml: 
	I0826 10:47:14.852237  107298 main.go:141] libmachine: (addons-530639) <domain type='kvm'>
	I0826 10:47:14.852269  107298 main.go:141] libmachine: (addons-530639)   <name>addons-530639</name>
	I0826 10:47:14.852293  107298 main.go:141] libmachine: (addons-530639)   <memory unit='MiB'>4000</memory>
	I0826 10:47:14.852300  107298 main.go:141] libmachine: (addons-530639)   <vcpu>2</vcpu>
	I0826 10:47:14.852306  107298 main.go:141] libmachine: (addons-530639)   <features>
	I0826 10:47:14.852335  107298 main.go:141] libmachine: (addons-530639)     <acpi/>
	I0826 10:47:14.852357  107298 main.go:141] libmachine: (addons-530639)     <apic/>
	I0826 10:47:14.852367  107298 main.go:141] libmachine: (addons-530639)     <pae/>
	I0826 10:47:14.852383  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852395  107298 main.go:141] libmachine: (addons-530639)   </features>
	I0826 10:47:14.852407  107298 main.go:141] libmachine: (addons-530639)   <cpu mode='host-passthrough'>
	I0826 10:47:14.852419  107298 main.go:141] libmachine: (addons-530639)   
	I0826 10:47:14.852435  107298 main.go:141] libmachine: (addons-530639)   </cpu>
	I0826 10:47:14.852448  107298 main.go:141] libmachine: (addons-530639)   <os>
	I0826 10:47:14.852458  107298 main.go:141] libmachine: (addons-530639)     <type>hvm</type>
	I0826 10:47:14.852467  107298 main.go:141] libmachine: (addons-530639)     <boot dev='cdrom'/>
	I0826 10:47:14.852478  107298 main.go:141] libmachine: (addons-530639)     <boot dev='hd'/>
	I0826 10:47:14.852502  107298 main.go:141] libmachine: (addons-530639)     <bootmenu enable='no'/>
	I0826 10:47:14.852520  107298 main.go:141] libmachine: (addons-530639)   </os>
	I0826 10:47:14.852536  107298 main.go:141] libmachine: (addons-530639)   <devices>
	I0826 10:47:14.852553  107298 main.go:141] libmachine: (addons-530639)     <disk type='file' device='cdrom'>
	I0826 10:47:14.852568  107298 main.go:141] libmachine: (addons-530639)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/boot2docker.iso'/>
	I0826 10:47:14.852582  107298 main.go:141] libmachine: (addons-530639)       <target dev='hdc' bus='scsi'/>
	I0826 10:47:14.852591  107298 main.go:141] libmachine: (addons-530639)       <readonly/>
	I0826 10:47:14.852598  107298 main.go:141] libmachine: (addons-530639)     </disk>
	I0826 10:47:14.852612  107298 main.go:141] libmachine: (addons-530639)     <disk type='file' device='disk'>
	I0826 10:47:14.852624  107298 main.go:141] libmachine: (addons-530639)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 10:47:14.852640  107298 main.go:141] libmachine: (addons-530639)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/addons-530639.rawdisk'/>
	I0826 10:47:14.852647  107298 main.go:141] libmachine: (addons-530639)       <target dev='hda' bus='virtio'/>
	I0826 10:47:14.852658  107298 main.go:141] libmachine: (addons-530639)     </disk>
	I0826 10:47:14.852670  107298 main.go:141] libmachine: (addons-530639)     <interface type='network'>
	I0826 10:47:14.852689  107298 main.go:141] libmachine: (addons-530639)       <source network='mk-addons-530639'/>
	I0826 10:47:14.852706  107298 main.go:141] libmachine: (addons-530639)       <model type='virtio'/>
	I0826 10:47:14.852719  107298 main.go:141] libmachine: (addons-530639)     </interface>
	I0826 10:47:14.852729  107298 main.go:141] libmachine: (addons-530639)     <interface type='network'>
	I0826 10:47:14.852740  107298 main.go:141] libmachine: (addons-530639)       <source network='default'/>
	I0826 10:47:14.852751  107298 main.go:141] libmachine: (addons-530639)       <model type='virtio'/>
	I0826 10:47:14.852763  107298 main.go:141] libmachine: (addons-530639)     </interface>
	I0826 10:47:14.852776  107298 main.go:141] libmachine: (addons-530639)     <serial type='pty'>
	I0826 10:47:14.852790  107298 main.go:141] libmachine: (addons-530639)       <target port='0'/>
	I0826 10:47:14.852801  107298 main.go:141] libmachine: (addons-530639)     </serial>
	I0826 10:47:14.852812  107298 main.go:141] libmachine: (addons-530639)     <console type='pty'>
	I0826 10:47:14.852829  107298 main.go:141] libmachine: (addons-530639)       <target type='serial' port='0'/>
	I0826 10:47:14.852842  107298 main.go:141] libmachine: (addons-530639)     </console>
	I0826 10:47:14.852856  107298 main.go:141] libmachine: (addons-530639)     <rng model='virtio'>
	I0826 10:47:14.852872  107298 main.go:141] libmachine: (addons-530639)       <backend model='random'>/dev/random</backend>
	I0826 10:47:14.852892  107298 main.go:141] libmachine: (addons-530639)     </rng>
	I0826 10:47:14.852904  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852910  107298 main.go:141] libmachine: (addons-530639)     
	I0826 10:47:14.852923  107298 main.go:141] libmachine: (addons-530639)   </devices>
	I0826 10:47:14.852930  107298 main.go:141] libmachine: (addons-530639) </domain>
	I0826 10:47:14.852940  107298 main.go:141] libmachine: (addons-530639) 
	I0826 10:47:14.859516  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:5c:32:ad in network default
	I0826 10:47:14.860100  107298 main.go:141] libmachine: (addons-530639) Ensuring networks are active...
	I0826 10:47:14.860127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:14.860765  107298 main.go:141] libmachine: (addons-530639) Ensuring network default is active
	I0826 10:47:14.861008  107298 main.go:141] libmachine: (addons-530639) Ensuring network mk-addons-530639 is active
	I0826 10:47:14.862197  107298 main.go:141] libmachine: (addons-530639) Getting domain xml...
	I0826 10:47:14.862863  107298 main.go:141] libmachine: (addons-530639) Creating domain...
	I0826 10:47:16.339355  107298 main.go:141] libmachine: (addons-530639) Waiting to get IP...
	I0826 10:47:16.340175  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.340590  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.340636  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.340576  107321 retry.go:31] will retry after 281.515746ms: waiting for machine to come up
	I0826 10:47:16.624344  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.624992  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.625025  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.624939  107321 retry.go:31] will retry after 243.037698ms: waiting for machine to come up
	I0826 10:47:16.869416  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:16.869844  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:16.869872  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:16.869804  107321 retry.go:31] will retry after 443.620624ms: waiting for machine to come up
	I0826 10:47:17.315571  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:17.316085  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:17.316114  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:17.316031  107321 retry.go:31] will retry after 426.309028ms: waiting for machine to come up
	I0826 10:47:17.743692  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:17.744176  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:17.744200  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:17.744127  107321 retry.go:31] will retry after 677.222999ms: waiting for machine to come up
	I0826 10:47:18.422949  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:18.423371  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:18.423395  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:18.423329  107321 retry.go:31] will retry after 656.330104ms: waiting for machine to come up
	I0826 10:47:19.081181  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:19.081613  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:19.081645  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:19.081567  107321 retry.go:31] will retry after 945.440779ms: waiting for machine to come up
	I0826 10:47:20.028865  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:20.029347  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:20.029372  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:20.029312  107321 retry.go:31] will retry after 1.142316945s: waiting for machine to come up
	I0826 10:47:21.173621  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:21.174133  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:21.174160  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:21.174063  107321 retry.go:31] will retry after 1.700752905s: waiting for machine to come up
	I0826 10:47:22.876921  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:22.877374  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:22.877402  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:22.877333  107321 retry.go:31] will retry after 1.812613042s: waiting for machine to come up
	I0826 10:47:24.691557  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:24.692071  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:24.692100  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:24.692003  107321 retry.go:31] will retry after 2.40737115s: waiting for machine to come up
	I0826 10:47:27.102520  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:27.103020  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:27.103043  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:27.102969  107321 retry.go:31] will retry after 2.73995796s: waiting for machine to come up
	I0826 10:47:29.844860  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:29.845393  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:29.845420  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:29.845349  107321 retry.go:31] will retry after 2.95503839s: waiting for machine to come up
	I0826 10:47:32.803660  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:32.804236  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find current IP address of domain addons-530639 in network mk-addons-530639
	I0826 10:47:32.804269  107298 main.go:141] libmachine: (addons-530639) DBG | I0826 10:47:32.804152  107321 retry.go:31] will retry after 4.473711544s: waiting for machine to come up
	I0826 10:47:37.281799  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.282440  107298 main.go:141] libmachine: (addons-530639) Found IP for machine: 192.168.39.11
	I0826 10:47:37.282468  107298 main.go:141] libmachine: (addons-530639) Reserving static IP address...
	I0826 10:47:37.282482  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has current primary IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.282934  107298 main.go:141] libmachine: (addons-530639) DBG | unable to find host DHCP lease matching {name: "addons-530639", mac: "52:54:00:9e:aa:b3", ip: "192.168.39.11"} in network mk-addons-530639
	I0826 10:47:37.363213  107298 main.go:141] libmachine: (addons-530639) DBG | Getting to WaitForSSH function...
	I0826 10:47:37.363237  107298 main.go:141] libmachine: (addons-530639) Reserved static IP address: 192.168.39.11
	I0826 10:47:37.363249  107298 main.go:141] libmachine: (addons-530639) Waiting for SSH to be available...
	I0826 10:47:37.366127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.366618  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.366647  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.366884  107298 main.go:141] libmachine: (addons-530639) DBG | Using SSH client type: external
	I0826 10:47:37.366922  107298 main.go:141] libmachine: (addons-530639) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa (-rw-------)
	I0826 10:47:37.366957  107298 main.go:141] libmachine: (addons-530639) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 10:47:37.366976  107298 main.go:141] libmachine: (addons-530639) DBG | About to run SSH command:
	I0826 10:47:37.366990  107298 main.go:141] libmachine: (addons-530639) DBG | exit 0
	I0826 10:47:37.503172  107298 main.go:141] libmachine: (addons-530639) DBG | SSH cmd err, output: <nil>: 
	I0826 10:47:37.503473  107298 main.go:141] libmachine: (addons-530639) KVM machine creation complete!
	I0826 10:47:37.503851  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:37.504387  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:37.504599  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:37.504776  107298 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 10:47:37.504792  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:37.506161  107298 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 10:47:37.506176  107298 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 10:47:37.506181  107298 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 10:47:37.506187  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.508328  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.508646  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.508675  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.508814  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.509003  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.509148  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.509252  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.509439  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.509630  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.509640  107298 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 10:47:37.618194  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
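
Running "exit 0" over SSH is used purely as a reachability probe: if the command executes at all, sshd is up and accepts the machine key. A small self-contained sketch of such a probe using golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders echoing the log, and sshAlive is an illustrative helper, not minikube's implementation.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAlive returns nil once "exit 0" can be executed over SSH, i.e. the SSH
// daemon is reachable and the private key is accepted.
func sshAlive(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// Placeholder values; the log above uses 192.168.39.11:22, user docker and the machine's id_rsa.
	if err := sshAlive("192.168.39.11:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
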
	I0826 10:47:37.618229  107298 main.go:141] libmachine: Detecting the provisioner...
	I0826 10:47:37.618244  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.621299  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.621674  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.621706  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.621828  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.622072  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.622231  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.622427  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.622594  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.622783  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.622797  107298 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 10:47:37.731832  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 10:47:37.731894  107298 main.go:141] libmachine: found compatible host: buildroot
	I0826 10:47:37.731904  107298 main.go:141] libmachine: Provisioning with buildroot...
	I0826 10:47:37.731919  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.732182  107298 buildroot.go:166] provisioning hostname "addons-530639"
	I0826 10:47:37.732204  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.732408  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.734947  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.735260  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.735292  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.735473  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.735684  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.735859  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.736002  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.736157  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.736342  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.736354  107298 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-530639 && echo "addons-530639" | sudo tee /etc/hostname
	I0826 10:47:37.856490  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-530639
	
	I0826 10:47:37.856517  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.859541  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.860082  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.860116  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.860323  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:37.860544  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.860764  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:37.860938  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:37.861125  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:37.861298  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:37.861313  107298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-530639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-530639/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-530639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 10:47:37.979646  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 10:47:37.979690  107298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 10:47:37.979714  107298 buildroot.go:174] setting up certificates
	I0826 10:47:37.979730  107298 provision.go:84] configureAuth start
	I0826 10:47:37.979744  107298 main.go:141] libmachine: (addons-530639) Calling .GetMachineName
	I0826 10:47:37.980119  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:37.982722  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.983092  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.983121  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.983249  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:37.985190  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.985507  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:37.985536  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:37.985680  107298 provision.go:143] copyHostCerts
	I0826 10:47:37.985808  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 10:47:37.985961  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 10:47:37.986048  107298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 10:47:37.986119  107298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.addons-530639 san=[127.0.0.1 192.168.39.11 addons-530639 localhost minikube]
	I0826 10:47:38.044501  107298 provision.go:177] copyRemoteCerts
	I0826 10:47:38.044567  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 10:47:38.044591  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.047475  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.047791  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.047820  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.048072  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.048291  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.048481  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.048631  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.133575  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 10:47:38.157412  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 10:47:38.181057  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 10:47:38.205113  107298 provision.go:87] duration metric: took 225.367648ms to configureAuth
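
The "duration metric" entries are plain wall-clock measurements taken around each step. In Go this is just time.Since around the call; configureAuth below is a placeholder for the cert generation and copying the log shows, not the real function.

package main

import (
	"log"
	"time"
)

// configureAuth stands in for the real step being timed.
func configureAuth() { time.Sleep(225 * time.Millisecond) }

func main() {
	start := time.Now()
	configureAuth()
	log.Printf("duration metric: took %s to configureAuth", time.Since(start))
}
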
	I0826 10:47:38.205151  107298 buildroot.go:189] setting minikube options for container-runtime
	I0826 10:47:38.205369  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:38.205477  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.208333  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.208704  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.208732  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.208919  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.209125  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.209299  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.209400  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.209617  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:38.209794  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:38.209809  107298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 10:47:38.482734  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 10:47:38.482760  107298 main.go:141] libmachine: Checking connection to Docker...
	I0826 10:47:38.482768  107298 main.go:141] libmachine: (addons-530639) Calling .GetURL
	I0826 10:47:38.484219  107298 main.go:141] libmachine: (addons-530639) DBG | Using libvirt version 6000000
	I0826 10:47:38.486655  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.486972  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.486998  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.487206  107298 main.go:141] libmachine: Docker is up and running!
	I0826 10:47:38.487225  107298 main.go:141] libmachine: Reticulating splines...
	I0826 10:47:38.487233  107298 client.go:171] duration metric: took 24.503868805s to LocalClient.Create
	I0826 10:47:38.487261  107298 start.go:167] duration metric: took 24.50393662s to libmachine.API.Create "addons-530639"
	I0826 10:47:38.487278  107298 start.go:293] postStartSetup for "addons-530639" (driver="kvm2")
	I0826 10:47:38.487291  107298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 10:47:38.487308  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.487572  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 10:47:38.487608  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.489726  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.490014  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.490043  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.490237  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.490494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.490672  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.490822  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.577010  107298 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 10:47:38.581059  107298 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 10:47:38.581090  107298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 10:47:38.581162  107298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 10:47:38.581191  107298 start.go:296] duration metric: took 93.904766ms for postStartSetup
	I0826 10:47:38.581226  107298 main.go:141] libmachine: (addons-530639) Calling .GetConfigRaw
	I0826 10:47:38.581839  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:38.584692  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.585009  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.585042  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.585268  107298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/config.json ...
	I0826 10:47:38.585470  107298 start.go:128] duration metric: took 24.621392499s to createHost
	I0826 10:47:38.585494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.587646  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.587950  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.587989  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.588134  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.588335  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.588502  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.588635  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.588822  107298 main.go:141] libmachine: Using SSH client type: native
	I0826 10:47:38.588986  107298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0826 10:47:38.588996  107298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 10:47:38.700358  107298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724669258.678008754
	
	I0826 10:47:38.700392  107298 fix.go:216] guest clock: 1724669258.678008754
	I0826 10:47:38.700403  107298 fix.go:229] Guest: 2024-08-26 10:47:38.678008754 +0000 UTC Remote: 2024-08-26 10:47:38.585482553 +0000 UTC m=+24.731896412 (delta=92.526201ms)
	I0826 10:47:38.700467  107298 fix.go:200] guest clock delta is within tolerance: 92.526201ms
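
The guest clock check parses the "date +%s.%N" output from the VM and compares it against the host-side timestamp recorded when the command was issued; provisioning proceeds only if the difference stays within a tolerance. A rough sketch of that comparison follows, reusing the two timestamps from the log; the 2-second tolerance is an assumption, not the value minikube uses.

package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"time"
)

// clockDelta parses a "seconds.nanoseconds" string as printed by date +%s.%N
// and returns how far the guest clock is from the given host time.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guestTime), nil
}

func main() {
	// Timestamps taken from the log above; the 2s tolerance is an assumption.
	delta, err := clockDelta("1724669258.678008754", time.Unix(0, 1724669258585482553))
	if err != nil {
		log.Fatal(err)
	}
	if math.Abs(float64(delta)) <= float64(2*time.Second) {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock is off by %s, consider syncing\n", delta)
	}
}
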
	I0826 10:47:38.700480  107298 start.go:83] releasing machines lock for "addons-530639", held for 24.736496664s
	I0826 10:47:38.700518  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.700853  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:38.703640  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.703870  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.703909  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.704049  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.704723  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.704946  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:38.705033  107298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 10:47:38.705110  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.705165  107298 ssh_runner.go:195] Run: cat /version.json
	I0826 10:47:38.705186  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:38.708220  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708255  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708628  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.708660  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:38.708691  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708729  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:38.708863  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.709019  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:38.709103  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.709185  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:38.709254  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.709320  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:38.709375  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.709473  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:38.826014  107298 ssh_runner.go:195] Run: systemctl --version
	I0826 10:47:38.832460  107298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 10:47:38.992806  107298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 10:47:38.998471  107298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 10:47:38.998541  107298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 10:47:39.014505  107298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 10:47:39.014545  107298 start.go:495] detecting cgroup driver to use...
	I0826 10:47:39.014620  107298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 10:47:39.031420  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 10:47:39.046003  107298 docker.go:217] disabling cri-docker service (if available) ...
	I0826 10:47:39.046072  107298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 10:47:39.060496  107298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 10:47:39.074552  107298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 10:47:39.192843  107298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 10:47:39.330617  107298 docker.go:233] disabling docker service ...
	I0826 10:47:39.330701  107298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 10:47:39.344635  107298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 10:47:39.357823  107298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 10:47:39.498024  107298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 10:47:39.635067  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 10:47:39.648519  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 10:47:39.666429  107298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 10:47:39.666498  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.676992  107298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 10:47:39.677063  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.687666  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.698328  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.708783  107298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 10:47:39.719674  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.730334  107298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.748155  107298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 10:47:39.758549  107298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 10:47:39.767989  107298 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 10:47:39.768064  107298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 10:47:39.780334  107298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
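
When the bridge-netfilter sysctl cannot be read, the fallback shown above is to load the br_netfilter module and then enable IPv4 forwarding before CRI-O is restarted. A condensed Go sketch of that check-then-fallback sequence follows; it shells out with os/exec, mirrors the commands in the log, and simplifies error handling.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// If the sysctl cannot be read, the kernel module is probably not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("couldn't verify netfilter, loading br_netfilter: %v", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatal(err)
		}
	}
	// Enable IPv4 forwarding so pod traffic can be routed off the node.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatal(err)
	}
}
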
	I0826 10:47:39.790344  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:39.911965  107298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 10:47:40.050897  107298 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 10:47:40.051029  107298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 10:47:40.055751  107298 start.go:563] Will wait 60s for crictl version
	I0826 10:47:40.055824  107298 ssh_runner.go:195] Run: which crictl
	I0826 10:47:40.059511  107298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 10:47:40.098328  107298 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 10:47:40.098452  107298 ssh_runner.go:195] Run: crio --version
	I0826 10:47:40.130254  107298 ssh_runner.go:195] Run: crio --version
	I0826 10:47:40.159919  107298 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 10:47:40.161645  107298 main.go:141] libmachine: (addons-530639) Calling .GetIP
	I0826 10:47:40.164398  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:40.164710  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:40.164740  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:40.164999  107298 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 10:47:40.169201  107298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 10:47:40.181655  107298 kubeadm.go:883] updating cluster {Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 10:47:40.181787  107298 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:40.181854  107298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 10:47:40.213812  107298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 10:47:40.213899  107298 ssh_runner.go:195] Run: which lz4
	I0826 10:47:40.217589  107298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 10:47:40.221614  107298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 10:47:40.221663  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 10:47:41.434431  107298 crio.go:462] duration metric: took 1.216879825s to copy over tarball
	I0826 10:47:41.434510  107298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 10:47:43.720590  107298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286015502s)
	I0826 10:47:43.720626  107298 crio.go:469] duration metric: took 2.286162048s to extract the tarball
	I0826 10:47:43.720635  107298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 10:47:43.757053  107298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 10:47:43.805221  107298 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 10:47:43.805256  107298 cache_images.go:84] Images are preloaded, skipping loading
	I0826 10:47:43.805265  107298 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.0 crio true true} ...
	I0826 10:47:43.805370  107298 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-530639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 10:47:43.805453  107298 ssh_runner.go:195] Run: crio config
	I0826 10:47:43.854319  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:43.854342  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:43.854352  107298 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 10:47:43.854378  107298 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-530639 NodeName:addons-530639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 10:47:43.854539  107298 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-530639"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 10:47:43.854625  107298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 10:47:43.864633  107298 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 10:47:43.864706  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 10:47:43.874020  107298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0826 10:47:43.893729  107298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 10:47:43.912136  107298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0826 10:47:43.930704  107298 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0826 10:47:43.934543  107298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
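
Both host.minikube.internal (earlier in the log) and control-plane.minikube.internal are pinned with the same idempotent pattern: drop any existing line ending in a tab plus the hostname, then append a fresh IP-to-name mapping. A minimal sketch of that rewrite follows; it writes to a temporary path rather than /etc/hosts, and ensureHostsEntry is an illustrative helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so that exactly one line maps name to ip.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop blank lines and any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// A /tmp path keeps the example harmless; the log edits /etc/hosts over SSH.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.11", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
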
	I0826 10:47:43.946492  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:44.066769  107298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 10:47:44.083219  107298 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639 for IP: 192.168.39.11
	I0826 10:47:44.083254  107298 certs.go:194] generating shared ca certs ...
	I0826 10:47:44.083278  107298 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.083469  107298 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 10:47:44.317559  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt ...
	I0826 10:47:44.317596  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt: {Name:mk528fb032b1b203659bc7401a1f3339f9cb42ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.317787  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key ...
	I0826 10:47:44.317798  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key: {Name:mk4bc8d0deb4ba0b612b6025cf4860247a955bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.317880  107298 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 10:47:44.703309  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt ...
	I0826 10:47:44.703343  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt: {Name:mk1b2a7cf4acdf32adf1087f9ce8c82681815beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.703516  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key ...
	I0826 10:47:44.703527  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key: {Name:mkc22cf5578b106a539c82ed4fa8827886c75fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.703598  107298 certs.go:256] generating profile certs ...
	I0826 10:47:44.703665  107298 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key
	I0826 10:47:44.703688  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt with IP's: []
	I0826 10:47:44.944510  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt ...
	I0826 10:47:44.944545  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: {Name:mk5d1f6fa9bb983f8038422980e3ca85392492c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.944723  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key ...
	I0826 10:47:44.944736  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.key: {Name:mkce47dadd34f8bae607c80a4f1b0f0c86e63785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:44.944807  107298 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754
	I0826 10:47:44.944822  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0826 10:47:45.159568  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 ...
	I0826 10:47:45.159600  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754: {Name:mk4ea42c643206796cbe3966cc77eecdfd68e79b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.159765  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754 ...
	I0826 10:47:45.159779  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754: {Name:mk5ca7443b8a31961552bdff2b9da9a94eb373bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.159848  107298 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt.18822754 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt
	I0826 10:47:45.159947  107298 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key.18822754 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key
	I0826 10:47:45.159992  107298 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key
	I0826 10:47:45.160010  107298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt with IP's: []
	I0826 10:47:45.253413  107298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt ...
	I0826 10:47:45.253447  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt: {Name:mke4af5277de083767543982254016a55df6bcd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.253609  107298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key ...
	I0826 10:47:45.253621  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key: {Name:mk769f3d1ad841791efadfe6cfcaa93a94069403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:45.253845  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 10:47:45.253883  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 10:47:45.253906  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 10:47:45.253932  107298 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 10:47:45.254524  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 10:47:45.286584  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 10:47:45.311045  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 10:47:45.337133  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 10:47:45.361748  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0826 10:47:45.387421  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 10:47:45.412750  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 10:47:45.437978  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 10:47:45.462180  107298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 10:47:45.486806  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 10:47:45.503413  107298 ssh_runner.go:195] Run: openssl version
	I0826 10:47:45.509191  107298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 10:47:45.520387  107298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.525349  107298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.525450  107298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 10:47:45.531582  107298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 10:47:45.543172  107298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 10:47:45.547872  107298 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 10:47:45.547945  107298 kubeadm.go:392] StartCluster: {Name:addons-530639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-530639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:47:45.548042  107298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 10:47:45.548122  107298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 10:47:45.586788  107298 cri.go:89] found id: ""
	I0826 10:47:45.586893  107298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 10:47:45.597479  107298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 10:47:45.607935  107298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 10:47:45.618174  107298 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 10:47:45.618206  107298 kubeadm.go:157] found existing configuration files:
	
	I0826 10:47:45.618255  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 10:47:45.628020  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 10:47:45.628116  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 10:47:45.638435  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 10:47:45.648280  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 10:47:45.648364  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 10:47:45.660893  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 10:47:45.670383  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 10:47:45.670484  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 10:47:45.684763  107298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 10:47:45.696352  107298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 10:47:45.696441  107298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
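	The grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so kubeadm can regenerate it cleanly. A minimal sketch of that pattern (an illustration, not the actual kubeadm.go code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: delete it (ignore errors, as `rm -f` would).
				_ = os.Remove(path)
				fmt.Printf("removed stale %s\n", path)
				continue
			}
			fmt.Printf("kept %s\n", path)
		}
	}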
	I0826 10:47:45.710879  107298 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 10:47:45.761454  107298 kubeadm.go:310] W0826 10:47:45.746260     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 10:47:45.762362  107298 kubeadm.go:310] W0826 10:47:45.747299     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 10:47:45.874243  107298 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 10:47:55.215839  107298 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 10:47:55.215941  107298 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 10:47:55.216064  107298 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 10:47:55.216160  107298 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 10:47:55.216274  107298 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 10:47:55.216451  107298 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 10:47:55.218290  107298 out.go:235]   - Generating certificates and keys ...
	I0826 10:47:55.218374  107298 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 10:47:55.218453  107298 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 10:47:55.218561  107298 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 10:47:55.218645  107298 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 10:47:55.218728  107298 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 10:47:55.218796  107298 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 10:47:55.218891  107298 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 10:47:55.219045  107298 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-530639 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0826 10:47:55.219111  107298 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 10:47:55.219222  107298 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-530639 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0826 10:47:55.219304  107298 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 10:47:55.219401  107298 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 10:47:55.219465  107298 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 10:47:55.219513  107298 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 10:47:55.219560  107298 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 10:47:55.219613  107298 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 10:47:55.219669  107298 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 10:47:55.219727  107298 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 10:47:55.219781  107298 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 10:47:55.219855  107298 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 10:47:55.219915  107298 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 10:47:55.221576  107298 out.go:235]   - Booting up control plane ...
	I0826 10:47:55.221665  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 10:47:55.221734  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 10:47:55.221827  107298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 10:47:55.221941  107298 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 10:47:55.222045  107298 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 10:47:55.222109  107298 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 10:47:55.222241  107298 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 10:47:55.222403  107298 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 10:47:55.222502  107298 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.237668ms
	I0826 10:47:55.222623  107298 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 10:47:55.222707  107298 kubeadm.go:310] [api-check] The API server is healthy after 5.503465218s
	I0826 10:47:55.222827  107298 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 10:47:55.222989  107298 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 10:47:55.223081  107298 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 10:47:55.223254  107298 kubeadm.go:310] [mark-control-plane] Marking the node addons-530639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 10:47:55.223335  107298 kubeadm.go:310] [bootstrap-token] Using token: 7wdj76.nlpbotovotxm4wlx
	I0826 10:47:55.224812  107298 out.go:235]   - Configuring RBAC rules ...
	I0826 10:47:55.224930  107298 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 10:47:55.225031  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 10:47:55.225156  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 10:47:55.225296  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 10:47:55.225434  107298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 10:47:55.225541  107298 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 10:47:55.225672  107298 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 10:47:55.225731  107298 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 10:47:55.225804  107298 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 10:47:55.225825  107298 kubeadm.go:310] 
	I0826 10:47:55.225910  107298 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 10:47:55.225923  107298 kubeadm.go:310] 
	I0826 10:47:55.225999  107298 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 10:47:55.226005  107298 kubeadm.go:310] 
	I0826 10:47:55.226026  107298 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 10:47:55.226080  107298 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 10:47:55.226124  107298 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 10:47:55.226129  107298 kubeadm.go:310] 
	I0826 10:47:55.226177  107298 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 10:47:55.226183  107298 kubeadm.go:310] 
	I0826 10:47:55.226224  107298 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 10:47:55.226230  107298 kubeadm.go:310] 
	I0826 10:47:55.226301  107298 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 10:47:55.226410  107298 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 10:47:55.226486  107298 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 10:47:55.226492  107298 kubeadm.go:310] 
	I0826 10:47:55.226559  107298 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 10:47:55.226622  107298 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 10:47:55.226629  107298 kubeadm.go:310] 
	I0826 10:47:55.226706  107298 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7wdj76.nlpbotovotxm4wlx \
	I0826 10:47:55.226805  107298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 10:47:55.226859  107298 kubeadm.go:310] 	--control-plane 
	I0826 10:47:55.226870  107298 kubeadm.go:310] 
	I0826 10:47:55.226939  107298 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 10:47:55.226946  107298 kubeadm.go:310] 
	I0826 10:47:55.227015  107298 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7wdj76.nlpbotovotxm4wlx \
	I0826 10:47:55.227125  107298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 10:47:55.227140  107298 cni.go:84] Creating CNI manager for ""
	I0826 10:47:55.227147  107298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:55.228673  107298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 10:47:55.229923  107298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 10:47:55.241477  107298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
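	The 496-byte 1-k8s.conflist written above is the bridge CNI configuration announced in the previous line. A plausible shape for such a file is sketched below; the field values are assumptions for illustration, not the exact file minikube generated:

	package main

	import "fmt"

	// bridgeConflist is an assumed example of a bridge CNI configuration with
	// host-local IPAM and a portmap chained plugin.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() { fmt.Println(bridgeConflist) }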
	I0826 10:47:55.264645  107298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 10:47:55.264734  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:55.264770  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-530639 minikube.k8s.io/updated_at=2024_08_26T10_47_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=addons-530639 minikube.k8s.io/primary=true
	I0826 10:47:55.286015  107298 ops.go:34] apiserver oom_adj: -16
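	The oom_adj of -16 read above means the kernel's OOM killer is strongly biased away from the API server process. A small sketch of the same check done locally (pgrep plus a /proc read; purely illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		pid := strings.Fields(string(out))[0]
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("kube-apiserver oom_adj: %s", adj)
	}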
	I0826 10:47:55.417683  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:55.917840  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:56.418559  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:56.918088  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:57.418085  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:57.917870  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:58.418358  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:58.918532  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:59.418415  107298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 10:47:59.504262  107298 kubeadm.go:1113] duration metric: took 4.239603265s to wait for elevateKubeSystemPrivileges
	I0826 10:47:59.504305  107298 kubeadm.go:394] duration metric: took 13.956365982s to StartCluster
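	The repeated "kubectl get sa default" calls above are a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists before binding cluster-admin privileges. A generic sketch of that wait pattern (the kubeconfig path and timeout are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// timeout expires.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // ServiceAccount exists; the RBAC binding can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("default ServiceAccount is ready")
	}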
	I0826 10:47:59.504326  107298 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:59.504479  107298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:47:59.504869  107298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:47:59.505077  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0826 10:47:59.505127  107298 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 10:47:59.505191  107298 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
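	Everything that follows is the addons listed in that map being enabled concurrently; each "Setting addon ... in \"addons-530639\"" / "Launching plugin server" pair is one worker. A generic sketch of that fan-out pattern (the addon names and the enableAddon helper are placeholders, not minikube's addons.go API):

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddon stands in for the real work: copying manifests, applying
	// them, and verifying the addon's pods.
	func enableAddon(profile, name string) error {
		fmt.Printf("enabling %s in %s\n", name, profile)
		return nil
	}

	func main() {
		addons := []string{"storage-provisioner", "ingress", "metrics-server", "registry", "yakd"}
		var wg sync.WaitGroup
		for _, a := range addons {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				if err := enableAddon("addons-530639", name); err != nil {
					fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
				}
			}(a)
		}
		wg.Wait()
	}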
	I0826 10:47:59.505317  107298 addons.go:69] Setting yakd=true in profile "addons-530639"
	I0826 10:47:59.505315  107298 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-530639"
	I0826 10:47:59.505360  107298 addons.go:234] Setting addon yakd=true in "addons-530639"
	I0826 10:47:59.505348  107298 addons.go:69] Setting cloud-spanner=true in profile "addons-530639"
	I0826 10:47:59.505360  107298 addons.go:69] Setting metrics-server=true in profile "addons-530639"
	I0826 10:47:59.505394  107298 addons.go:69] Setting registry=true in profile "addons-530639"
	I0826 10:47:59.505402  107298 addons.go:234] Setting addon cloud-spanner=true in "addons-530639"
	I0826 10:47:59.505406  107298 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-530639"
	I0826 10:47:59.505422  107298 addons.go:69] Setting default-storageclass=true in profile "addons-530639"
	I0826 10:47:59.505442  107298 addons.go:69] Setting ingress-dns=true in profile "addons-530639"
	I0826 10:47:59.505447  107298 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-530639"
	I0826 10:47:59.505453  107298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-530639"
	I0826 10:47:59.505458  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505467  107298 addons.go:69] Setting inspektor-gadget=true in profile "addons-530639"
	I0826 10:47:59.505483  107298 addons.go:234] Setting addon inspektor-gadget=true in "addons-530639"
	I0826 10:47:59.505519  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505530  107298 addons.go:69] Setting gcp-auth=true in profile "addons-530639"
	I0826 10:47:59.505548  107298 mustload.go:65] Loading cluster: addons-530639
	I0826 10:47:59.505740  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:59.505398  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505888  107298 addons.go:69] Setting volcano=true in profile "addons-530639"
	I0826 10:47:59.505901  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505918  107298 addons.go:234] Setting addon volcano=true in "addons-530639"
	I0826 10:47:59.505931  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505940  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505944  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506000  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506064  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506087  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506182  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506200  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506218  107298 addons.go:69] Setting volumesnapshots=true in profile "addons-530639"
	I0826 10:47:59.506249  107298 addons.go:234] Setting addon volumesnapshots=true in "addons-530639"
	I0826 10:47:59.506276  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506292  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506326  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506457  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.505403  107298 addons.go:69] Setting helm-tiller=true in profile "addons-530639"
	I0826 10:47:59.506488  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506495  107298 addons.go:234] Setting addon helm-tiller=true in "addons-530639"
	I0826 10:47:59.506521  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506557  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506576  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.506644  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506665  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505460  107298 addons.go:234] Setting addon ingress-dns=true in "addons-530639"
	I0826 10:47:59.506809  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.506904  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.506928  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505382  107298 addons.go:69] Setting storage-provisioner=true in profile "addons-530639"
	I0826 10:47:59.507188  107298 addons.go:234] Setting addon storage-provisioner=true in "addons-530639"
	I0826 10:47:59.507216  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.507225  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.507247  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505424  107298 addons.go:234] Setting addon metrics-server=true in "addons-530639"
	I0826 10:47:59.507286  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505384  107298 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-530639"
	I0826 10:47:59.508162  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.508527  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.508546  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.505393  107298 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-530639"
	I0826 10:47:59.511059  107298 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-530639"
	I0826 10:47:59.511106  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.511490  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.511525  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.511674  107298 out.go:177] * Verifying Kubernetes components...
	I0826 10:47:59.505429  107298 addons.go:234] Setting addon registry=true in "addons-530639"
	I0826 10:47:59.511884  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.513175  107298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 10:47:59.505432  107298 addons.go:69] Setting ingress=true in profile "addons-530639"
	I0826 10:47:59.513335  107298 addons.go:234] Setting addon ingress=true in "addons-530639"
	I0826 10:47:59.513381  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.505367  107298 config.go:182] Loaded profile config "addons-530639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 10:47:59.527591  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35559
	I0826 10:47:59.527820  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0826 10:47:59.527947  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0826 10:47:59.528365  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.528496  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.529049  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.529069  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.529094  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.529112  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.529449  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.529519  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0826 10:47:59.529547  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.529744  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.529822  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.530407  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.530450  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.530508  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.530702  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.530714  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.531139  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.531159  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.531231  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.531739  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.532438  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.532486  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.534681  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0826 10:47:59.536819  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0826 10:47:59.539188  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539245  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539322  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539342  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539191  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539390  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539449  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539467  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.539674  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.539719  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.540206  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.540333  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.540859  107298 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-530639"
	I0826 10:47:59.540912  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.541284  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.541328  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.542036  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.542063  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.542220  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.542241  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.542287  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0826 10:47:59.542478  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.545603  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.545743  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.545801  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.548072  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.548099  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.548787  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.548833  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.549073  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.549703  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.549750  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.550316  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0826 10:47:59.550963  107298 addons.go:234] Setting addon default-storageclass=true in "addons-530639"
	I0826 10:47:59.551006  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.551343  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.551375  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.555357  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.555904  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.555925  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.556254  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.556424  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.558641  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:47:59.559245  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.559306  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.569207  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0826 10:47:59.569987  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.570655  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.570686  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.571168  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.571794  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.571851  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.579058  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0826 10:47:59.579656  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.580205  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.580230  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.580650  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.581346  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.581394  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.581638  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45099
	I0826 10:47:59.582191  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.582822  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.582854  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.583250  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.583497  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.585043  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0826 10:47:59.585537  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.586181  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.586555  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:47:59.586574  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:47:59.586921  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:47:59.586957  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:47:59.586965  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:47:59.586973  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:47:59.586983  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:47:59.587188  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0826 10:47:59.587645  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:47:59.588130  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.588149  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.588458  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.588751  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.588900  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.588915  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.589495  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.589545  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.589621  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0826 10:47:59.590089  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.590672  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.590691  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.591332  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.592135  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.592172  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.592642  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0826 10:47:59.593406  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.594081  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.594097  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.594481  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.594658  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0826 10:47:59.594886  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.595167  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.595192  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.595358  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.595713  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.595731  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.596161  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.597120  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.597146  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.597405  107298 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0826 10:47:59.597430  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	W0826 10:47:59.597529  107298 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0826 10:47:59.598626  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0826 10:47:59.599085  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.599558  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.600254  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.600281  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.600442  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.600779  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.601340  107298 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0826 10:47:59.602341  107298 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0826 10:47:59.603138  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40923
	I0826 10:47:59.603214  107298 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0826 10:47:59.603236  107298 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0826 10:47:59.603260  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.603740  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.604040  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0826 10:47:59.604063  107298 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0826 10:47:59.604094  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.605839  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.605868  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.606390  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38091
	I0826 10:47:59.606768  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.606789  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.607271  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.607691  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.607939  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.608155  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.608508  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.608983  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.609024  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.609265  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609289  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.609311  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609323  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0826 10:47:59.609717  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.609851  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.610178  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.610262  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.610276  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.610281  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.610298  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.610422  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.610557  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.610912  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.611252  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.611467  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.611485  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.611541  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.611754  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.612003  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.612822  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.613112  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.615230  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0826 10:47:59.615443  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.615507  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0826 10:47:59.616155  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.616690  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.616716  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.617262  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.617748  107298 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0826 10:47:59.618095  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.618116  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.618118  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.618291  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.618643  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.618810  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0826 10:47:59.619348  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.619870  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.619887  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.619905  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0826 10:47:59.619923  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0826 10:47:59.619943  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.620248  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.620394  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.621284  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.623296  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.623311  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.625097  107298 out.go:177]   - Using image docker.io/busybox:stable
	I0826 10:47:59.625097  107298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 10:47:59.625543  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.625983  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.626022  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.626060  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I0826 10:47:59.626630  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.626691  107298 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 10:47:59.626706  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 10:47:59.626726  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.627497  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.627516  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.627996  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.628204  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0826 10:47:59.628278  107298 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0826 10:47:59.628393  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.628581  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.628726  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.628779  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.628824  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.628899  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.629337  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.629917  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.629934  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.630417  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.630650  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.632120  107298 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0826 10:47:59.632145  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0826 10:47:59.632167  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.632336  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0826 10:47:59.632821  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.633617  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.634931  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.634952  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.635027  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635353  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.635388  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635646  107298 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0826 10:47:59.635673  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.635937  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.636234  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.636309  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.636356  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.636376  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.636597  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.636793  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.636969  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.637121  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.637184  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:47:59.637232  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:47:59.637453  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.637654  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.638493  107298 out.go:177]   - Using image docker.io/registry:2.8.3
	I0826 10:47:59.640091  107298 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0826 10:47:59.640110  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0826 10:47:59.640128  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.643798  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.644319  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.644367  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.644600  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.644701  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0826 10:47:59.645080  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.645283  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.645359  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.645627  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.646306  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0826 10:47:59.646986  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.647653  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.647671  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.647733  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0826 10:47:59.648006  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.648020  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.648075  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.648482  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.648534  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.649050  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.649103  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.650185  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.650207  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.650709  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.651031  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.651216  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.652057  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.653499  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.653550  107298 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0826 10:47:59.653669  107298 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0826 10:47:59.654908  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0826 10:47:59.654978  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0826 10:47:59.654990  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 10:47:59.655046  107298 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 10:47:59.655067  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.655111  107298 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0826 10:47:59.655125  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0826 10:47:59.655144  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.656085  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0826 10:47:59.656228  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0826 10:47:59.656243  107298 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0826 10:47:59.656252  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0826 10:47:59.656262  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.656270  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.656939  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.657405  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.657503  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.657524  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.657935  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.657955  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.658074  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.658095  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.658108  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.658450  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.658453  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.658932  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.659323  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.659366  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.660321  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.660421  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.660984  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.661020  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.661179  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.661339  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.661450  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.661550  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.662004  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.662140  107298 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0826 10:47:59.662341  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.663091  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.663127  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.663657  107298 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0826 10:47:59.663677  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0826 10:47:59.663702  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0826 10:47:59.663767  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.663831  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.663864  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.663987  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.664179  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.664306  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.664319  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.665023  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.665100  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.665666  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.665881  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.665995  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:47:59.666014  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0826 10:47:59.666147  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.666299  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.666713  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.667239  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.667333  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.667383  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.667578  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.667740  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.667935  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.668510  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0826 10:47:59.668564  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0826 10:47:59.669127  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0826 10:47:59.669256  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46645
	I0826 10:47:59.669636  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.669750  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:47:59.670212  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.670226  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:47:59.670234  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.670238  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:47:59.670528  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.670578  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:47:59.670765  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.670771  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:47:59.670963  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0826 10:47:59.670978  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:47:59.672196  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0826 10:47:59.672461  107298 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0826 10:47:59.672484  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0826 10:47:59.672505  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.672552  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.673680  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:47:59.674048  107298 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 10:47:59.674059  107298 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 10:47:59.674073  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.674242  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0826 10:47:59.674247  107298 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0826 10:47:59.675899  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0826 10:47:59.675979  107298 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0826 10:47:59.676005  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0826 10:47:59.676024  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.676166  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.676760  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.676781  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.677054  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.677277  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.677738  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.677923  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.678360  107298 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0826 10:47:59.678559  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679218  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.679245  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679338  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679418  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.679602  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.679626  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.679651  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.679730  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0826 10:47:59.679747  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0826 10:47:59.679782  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:47:59.679808  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.679843  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.679998  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.680018  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.680139  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.680266  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:47:59.682389  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.682927  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:47:59.682957  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:47:59.683154  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:47:59.683361  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:47:59.683596  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:47:59.683759  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	W0826 10:47:59.718139  107298 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49514->192.168.39.11:22: read: connection reset by peer
	I0826 10:47:59.718193  107298 retry.go:31] will retry after 323.018998ms: ssh: handshake failed: read tcp 192.168.39.1:49514->192.168.39.11:22: read: connection reset by peer
	W0826 10:47:59.718263  107298 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49518->192.168.39.11:22: read: connection reset by peer
	I0826 10:47:59.718273  107298 retry.go:31] will retry after 352.73951ms: ssh: handshake failed: read tcp 192.168.39.1:49518->192.168.39.11:22: read: connection reset by peer
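	The two handshake failures above are transient: the freshly booted guest resets the first SSH connections, sshutil logs the dial error, and retry.go schedules another attempt after a short randomized delay (323ms and 352ms here). A minimal sketch of that dial-and-retry pattern in Go, using a plain TCP dial as a stand-in for the real SSH handshake (hypothetical helper, not minikube's actual sshutil/retry code):

	    // retryDial keeps re-attempting a dial with a small randomized backoff,
	    // mirroring the "will retry after ..." lines in the log above.
	    // imports: fmt, math/rand, net, time
	    func retryDial(addr string, attempts int) (net.Conn, error) {
	        var lastErr error
	        for i := 0; i < attempts; i++ {
	            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	            if err == nil {
	                return conn, nil
	            }
	            lastErr = err
	            // Sleep roughly 300-400ms, comparable to the delays logged above.
	            time.Sleep(300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond)
	        }
	        return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
	    }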
	I0826 10:47:59.882220  107298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 10:47:59.882259  107298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
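	The second command above rewrites the coredns ConfigMap in place: it dumps the ConfigMap as YAML, uses sed to insert a hosts block before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors", and feeds the result back through kubectl replace. Once it completes (see the "host record injected" line further down), the Corefile carries roughly this extra fragment, so pods can resolve host.minikube.internal to the host-side address of the VM network:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf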
	I0826 10:47:59.912702  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0826 10:47:59.951500  107298 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0826 10:47:59.951534  107298 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0826 10:47:59.964048  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0826 10:47:59.975067  107298 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0826 10:47:59.975108  107298 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0826 10:47:59.978819  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 10:47:59.978870  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0826 10:47:59.988578  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 10:47:59.989630  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0826 10:47:59.989653  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0826 10:48:00.019383  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 10:48:00.036394  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0826 10:48:00.063218  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0826 10:48:00.063254  107298 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0826 10:48:00.065708  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0826 10:48:00.071292  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0826 10:48:00.071325  107298 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0826 10:48:00.102916  107298 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0826 10:48:00.102950  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0826 10:48:00.131548  107298 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0826 10:48:00.131588  107298 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0826 10:48:00.134646  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0826 10:48:00.134670  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0826 10:48:00.153844  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 10:48:00.153868  107298 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 10:48:00.263952  107298 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0826 10:48:00.263983  107298 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0826 10:48:00.264366  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0826 10:48:00.264394  107298 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0826 10:48:00.279725  107298 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0826 10:48:00.279754  107298 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0826 10:48:00.322403  107298 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0826 10:48:00.322434  107298 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0826 10:48:00.348642  107298 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 10:48:00.348669  107298 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 10:48:00.365055  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0826 10:48:00.431617  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0826 10:48:00.431664  107298 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0826 10:48:00.434977  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0826 10:48:00.435010  107298 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0826 10:48:00.452827  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0826 10:48:00.481437  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 10:48:00.494021  107298 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0826 10:48:00.494063  107298 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0826 10:48:00.538185  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0826 10:48:00.548517  107298 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0826 10:48:00.548554  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0826 10:48:00.575978  107298 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:00.576003  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0826 10:48:00.618283  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0826 10:48:00.618319  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0826 10:48:00.702315  107298 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0826 10:48:00.702347  107298 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0826 10:48:00.835544  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0826 10:48:00.899939  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:00.966513  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0826 10:48:00.966554  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0826 10:48:01.045598  107298 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0826 10:48:01.045640  107298 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0826 10:48:01.219693  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0826 10:48:01.219724  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0826 10:48:01.306204  107298 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0826 10:48:01.306233  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0826 10:48:01.372437  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0826 10:48:01.372469  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0826 10:48:01.440430  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0826 10:48:01.610601  107298 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0826 10:48:01.610639  107298 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0826 10:48:01.758128  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0826 10:48:01.758155  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0826 10:48:01.840983  107298 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.958690198s)
	I0826 10:48:01.841015  107298 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0826 10:48:01.841028  107298 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.958771105s)
	I0826 10:48:01.841141  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.928405448s)
	I0826 10:48:01.841201  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:01.841215  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:01.841651  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:01.841690  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:01.841707  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:01.841764  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:01.841785  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:01.842082  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:01.842114  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:01.842139  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:01.842108  107298 node_ready.go:35] waiting up to 6m0s for node "addons-530639" to be "Ready" ...
	I0826 10:48:01.847028  107298 node_ready.go:49] node "addons-530639" has status "Ready":"True"
	I0826 10:48:01.847059  107298 node_ready.go:38] duration metric: took 4.835462ms for node "addons-530639" to be "Ready" ...
	I0826 10:48:01.847073  107298 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 10:48:01.871750  107298 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace to be "Ready" ...
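	The readiness gates above (node "Ready", then each system-critical pod) come down to polling the API server for the Ready condition until a timeout expires. A minimal sketch of that check with client-go, assuming an already configured *kubernetes.Clientset (this illustrates the idea, not minikube's actual node_ready/pod_ready code):

	    // isPodReady reports whether the named pod currently has Ready=True,
	    // the condition the pod_ready waits in this log are polling for.
	    // imports: context, corev1 "k8s.io/api/core/v1",
	    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
	    //          "k8s.io/client-go/kubernetes"
	    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }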
	I0826 10:48:02.151015  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0826 10:48:02.151052  107298 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0826 10:48:02.347148  107298 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-530639" context rescaled to 1 replicas
	I0826 10:48:02.446275  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0826 10:48:02.446314  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0826 10:48:02.797322  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0826 10:48:02.797346  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0826 10:48:02.899791  107298 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0826 10:48:02.899825  107298 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0826 10:48:03.164016  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0826 10:48:03.881628  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:05.918274  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:06.634167  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0826 10:48:06.634225  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:48:06.637305  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:06.637774  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:48:06.637817  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:06.637998  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:48:06.638256  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:48:06.638399  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:48:06.638531  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:48:07.149028  107298 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0826 10:48:07.357403  107298 addons.go:234] Setting addon gcp-auth=true in "addons-530639"
	I0826 10:48:07.357469  107298 host.go:66] Checking if "addons-530639" exists ...
	I0826 10:48:07.357867  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:48:07.357906  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:48:07.374389  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0826 10:48:07.374976  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:48:07.375555  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:48:07.375585  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:48:07.376011  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:48:07.376515  107298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 10:48:07.376542  107298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 10:48:07.393088  107298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0826 10:48:07.393617  107298 main.go:141] libmachine: () Calling .GetVersion
	I0826 10:48:07.394153  107298 main.go:141] libmachine: Using API Version  1
	I0826 10:48:07.394182  107298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 10:48:07.394550  107298 main.go:141] libmachine: () Calling .GetMachineName
	I0826 10:48:07.394803  107298 main.go:141] libmachine: (addons-530639) Calling .GetState
	I0826 10:48:07.396415  107298 main.go:141] libmachine: (addons-530639) Calling .DriverName
	I0826 10:48:07.396698  107298 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0826 10:48:07.396723  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHHostname
	I0826 10:48:07.399468  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:07.399895  107298 main.go:141] libmachine: (addons-530639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:aa:b3", ip: ""} in network mk-addons-530639: {Iface:virbr1 ExpiryTime:2024-08-26 11:47:28 +0000 UTC Type:0 Mac:52:54:00:9e:aa:b3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-530639 Clientid:01:52:54:00:9e:aa:b3}
	I0826 10:48:07.399925  107298 main.go:141] libmachine: (addons-530639) DBG | domain addons-530639 has defined IP address 192.168.39.11 and MAC address 52:54:00:9e:aa:b3 in network mk-addons-530639
	I0826 10:48:07.400104  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHPort
	I0826 10:48:07.400298  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHKeyPath
	I0826 10:48:07.400494  107298 main.go:141] libmachine: (addons-530639) Calling .GetSSHUsername
	I0826 10:48:07.400725  107298 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/addons-530639/id_rsa Username:docker}
	I0826 10:48:08.319296  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.355208577s)
	I0826 10:48:08.319355  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319368  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319417  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.33080134s)
	I0826 10:48:08.319468  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319483  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319511  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.283088334s)
	I0826 10:48:08.319483  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.300065129s)
	I0826 10:48:08.319557  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319538  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319606  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319630  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.253892816s)
	I0826 10:48:08.319665  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319573  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319676  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319722  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.954632116s)
	I0826 10:48:08.319750  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319759  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319762  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.866878458s)
	I0826 10:48:08.319785  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319794  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319861  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.838395324s)
	I0826 10:48:08.319883  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319893  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319938  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.781712131s)
	I0826 10:48:08.319968  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.319977  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.319986  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.484403191s)
	I0826 10:48:08.320005  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320014  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320130  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.420145979s)
	W0826 10:48:08.320164  107298 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0826 10:48:08.320208  107298 retry.go:31] will retry after 306.411063ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
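	The failure being retried here is an ordering race, not a broken manifest: the VolumeSnapshot CRDs and the VolumeSnapshotClass are applied in the same batch, so the class object can reach the API server before the volumesnapshotclasses CRD has been established, producing the "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" error; the retry ~300ms later succeeds once discovery knows the new kind. One way to remove the race entirely (a sketch, not what addons.go does) is to wait for the CRD's Established condition before applying objects of that kind:

	    // waitForCRDEstablished polls until the named CRD reports Established=True,
	    // after which kinds it defines (e.g. VolumeSnapshotClass) can be applied safely.
	    // imports: context, fmt, time,
	    //          apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1",
	    //          apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset",
	    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    func waitForCRDEstablished(ctx context.Context, cs *apiextensionsclient.Clientset, name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	            if err == nil {
	                for _, cond := range crd.Status.Conditions {
	                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            time.Sleep(time.Second)
	        }
	        return fmt.Errorf("CRD %s not established within %s", name, timeout)
	    }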
	I0826 10:48:08.320235  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.87975339s)
	I0826 10:48:08.320263  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320275  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320590  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320597  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320625  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320630  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320645  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320646  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320655  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320657  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320665  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320645  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320679  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320689  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320665  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320711  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320713  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320801  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320803  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320840  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320848  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.320860  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320874  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320909  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320924  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.320936  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320951  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.320962  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320989  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321005  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321010  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321040  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321058  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.321262  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321306  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321313  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321472  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321495  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321502  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321522  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321530  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.320822  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321582  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321591  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.321598  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.321652  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321675  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321682  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.321890  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.321914  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.321920  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.320929  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.323192  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.323207  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.323632  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.323668  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.323676  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325398  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325452  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325465  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325476  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.325483  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.325565  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325570  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325580  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325590  107298 addons.go:475] Verifying addon metrics-server=true in "addons-530639"
	I0826 10:48:08.325615  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325643  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325651  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325924  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325942  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.325965  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.325971  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.325981  107298 addons.go:475] Verifying addon registry=true in "addons-530639"
	I0826 10:48:08.326345  107298 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-530639 service yakd-dashboard -n yakd-dashboard
	
	I0826 10:48:08.326822  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.326878  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.326889  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327094  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.327109  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327118  107298 addons.go:475] Verifying addon ingress=true in "addons-530639"
	I0826 10:48:08.327260  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.327438  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.327283  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.327866  107298 out.go:177] * Verifying registry addon...
	I0826 10:48:08.328329  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.328349  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.328365  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.328374  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.328643  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.328666  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.328703  107298 out.go:177] * Verifying ingress addon...
	I0826 10:48:08.330314  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0826 10:48:08.330740  107298 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0826 10:48:08.388224  107298 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0826 10:48:08.388251  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:08.389037  107298 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0826 10:48:08.389068  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:08.394603  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:08.398504  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.398583  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.398918  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.398937  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	W0826 10:48:08.399054  107298 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
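The 'default-storageclass' warning above is an ordinary Kubernetes optimistic-concurrency conflict: the StorageClass object changed between the addon's read and its update, so the write was rejected rather than applied. As a hedged sketch (not what the addon runs verbatim here), the same default-class flip can be retried by hand against the "local-path" StorageClass named in the error, using the standard is-default-class annotation:

	kubectl --context addons-530639 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

A merge patch like this carries no resourceVersion, so simply re-running it after such a conflict normally succeeds.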
	I0826 10:48:08.419743  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:08.419767  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:08.420068  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:08.420092  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:08.420109  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:08.626883  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0826 10:48:09.092135  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.092699  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.353858  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.353893  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.865671  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:09.865904  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:09.893284  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729205013s)
	I0826 10:48:09.893357  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893376  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893409  107298 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.496687736s)
	I0826 10:48:09.893542  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.266617617s)
	I0826 10:48:09.893647  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893665  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893748  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.893805  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.893812  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.893878  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893906  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.893929  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.893945  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.893949  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.893959  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:09.893967  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:09.894194  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:09.894231  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.894240  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.895321  107298 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0826 10:48:09.895964  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:09.895985  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:09.896002  107298 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-530639"
	I0826 10:48:09.898209  107298 out.go:177] * Verifying csi-hostpath-driver addon...
	I0826 10:48:09.898211  107298 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0826 10:48:09.900395  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0826 10:48:09.900422  107298 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0826 10:48:09.901247  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0826 10:48:09.942978  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0826 10:48:09.943008  107298 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0826 10:48:09.961820  107298 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0826 10:48:09.961858  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:10.053431  107298 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0826 10:48:10.053463  107298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0826 10:48:10.139071  107298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0826 10:48:10.334087  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:10.337443  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:10.406869  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:10.837000  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:10.837256  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:10.880507  107298 pod_ready.go:103] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:10.908643  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:11.342604  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:11.343921  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:11.424179  107298 pod_ready.go:98] pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:48:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.11 HostIPs:[{IP:192.168.39.11}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-26 10:47:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-26 10:48:04 +0000 UTC,FinishedAt:2024-08-26 10:48:09 +0000 UTC,ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a Started:0xc001442720 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016d6780} {Name:kube-api-access-ltfps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016d6790}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0826 10:48:11.424210  107298 pod_ready.go:82] duration metric: took 9.552411873s for pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace to be "Ready" ...
	E0826 10:48:11.424223  107298 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-dfqlw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:48:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-26 10:47:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.11 HostIPs:[{IP:192.168.39.11}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-26 10:47:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-26 10:48:04 +0000 UTC,FinishedAt:2024-08-26 10:48:09 +0000 UTC,ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://643641a2b69a7f6850a2b135f36ee7d9889dcc21f4248701b2c792b98b143e1a Started:0xc001442720 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016d6780} {Name:kube-api-access-ltfps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016d6790}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
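The pod_ready.go entries above do not indicate a failure: the first CoreDNS replica (coredns-6f6b679f8f-dfqlw) has already reached phase "Succeeded", meaning its container exited cleanly, which here most likely reflects the CoreDNS Deployment being scaled down to a single replica during startup, so the wait skips it and moves on to the surviving pod coredns-6f6b679f8f-wkxkf below. A minimal check of the reported phase, assuming the terminated pod object has not yet been deleted:

	kubectl --context addons-530639 -n kube-system get pod coredns-6f6b679f8f-dfqlw -o jsonpath='{.status.phase}'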
	I0826 10:48:11.424236  107298 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.437882  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:11.545349  107298 pod_ready.go:93] pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.545379  107298 pod_ready.go:82] duration metric: took 121.134263ms for pod "coredns-6f6b679f8f-wkxkf" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.545392  107298 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.616934  107298 pod_ready.go:93] pod "etcd-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.616965  107298 pod_ready.go:82] duration metric: took 71.565501ms for pod "etcd-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.616980  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.693045  107298 pod_ready.go:93] pod "kube-apiserver-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.693088  107298 pod_ready.go:82] duration metric: took 76.097584ms for pod "kube-apiserver-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.693104  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.808330  107298 pod_ready.go:93] pod "kube-controller-manager-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.808357  107298 pod_ready.go:82] duration metric: took 115.243832ms for pod "kube-controller-manager-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.808367  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qbghq" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.820337  107298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.681215995s)
	I0826 10:48:11.820407  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:11.820424  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:11.820780  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:11.820810  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:11.820822  107298 main.go:141] libmachine: Making call to close driver server
	I0826 10:48:11.820831  107298 main.go:141] libmachine: (addons-530639) Calling .Close
	I0826 10:48:11.820839  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:11.821111  107298 main.go:141] libmachine: Successfully made call to close driver server
	I0826 10:48:11.821129  107298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 10:48:11.821131  107298 main.go:141] libmachine: (addons-530639) DBG | Closing plugin on server side
	I0826 10:48:11.823210  107298 addons.go:475] Verifying addon gcp-auth=true in "addons-530639"
	I0826 10:48:11.825091  107298 out.go:177] * Verifying gcp-auth addon...
	I0826 10:48:11.827490  107298 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0826 10:48:11.867702  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:11.868106  107298 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0826 10:48:11.868124  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:11.868697  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:11.889037  107298 pod_ready.go:93] pod "kube-proxy-qbghq" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:11.889068  107298 pod_ready.go:82] duration metric: took 80.693517ms for pod "kube-proxy-qbghq" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.889083  107298 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:11.959343  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:12.176190  107298 pod_ready.go:93] pod "kube-scheduler-addons-530639" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:12.176217  107298 pod_ready.go:82] duration metric: took 287.12697ms for pod "kube-scheduler-addons-530639" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:12.176228  107298 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:12.332305  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:12.342001  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:12.345308  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:12.434480  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:12.831162  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:12.834442  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:12.834994  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:12.905740  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:13.331759  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:13.334042  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:13.335511  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:13.417010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:13.831696  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:13.835233  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:13.835834  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:13.907247  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:14.186773  107298 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:14.331374  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:14.334097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:14.334350  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:14.408092  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:14.831845  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:14.834307  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:14.835201  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:14.906503  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:15.332036  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:15.335123  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:15.335571  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:15.406770  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:15.832221  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:15.835051  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:15.835242  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:15.906007  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:16.334000  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:16.334342  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:16.336529  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:16.407082  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:16.683303  107298 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"False"
	I0826 10:48:16.831490  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:16.840182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:16.841523  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:16.906660  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:17.331505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:17.334051  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:17.334466  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:17.405468  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:17.830915  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:17.834241  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:17.834664  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:17.906351  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:18.332632  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:18.334590  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:18.336678  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:18.406414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:18.833706  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:18.834979  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:18.835501  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:18.905777  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:19.183056  107298 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace has status "Ready":"True"
	I0826 10:48:19.183084  107298 pod_ready.go:82] duration metric: took 7.006849533s for pod "nvidia-device-plugin-daemonset-dwxvz" in "kube-system" namespace to be "Ready" ...
	I0826 10:48:19.183091  107298 pod_ready.go:39] duration metric: took 17.336002509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 10:48:19.183107  107298 api_server.go:52] waiting for apiserver process to appear ...
	I0826 10:48:19.183160  107298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 10:48:19.199882  107298 api_server.go:72] duration metric: took 19.694713746s to wait for apiserver process to appear ...
	I0826 10:48:19.199918  107298 api_server.go:88] waiting for apiserver healthz status ...
	I0826 10:48:19.199940  107298 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0826 10:48:19.204286  107298 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0826 10:48:19.205213  107298 api_server.go:141] control plane version: v1.31.0
	I0826 10:48:19.205263  107298 api_server.go:131] duration metric: took 5.336161ms to wait for apiserver health ...
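The apiserver readiness gate above is a plain HTTP probe: api_server.go hits the /healthz endpoint on https://192.168.39.11:8443 and accepts a 200 response with body "ok". As a hedged sketch, the same probe can be reproduced from the host, assuming anonymous access to /healthz is enabled (the Kubernetes default); -k skips verification of the cluster's self-signed certificate:

	curl -ks https://192.168.39.11:8443/healthz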
	I0826 10:48:19.205274  107298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 10:48:19.215199  107298 system_pods.go:59] 18 kube-system pods found
	I0826 10:48:19.215237  107298 system_pods.go:61] "coredns-6f6b679f8f-wkxkf" [22b66a68-1ed8-47c0-98fb-681f0fc08eca] Running
	I0826 10:48:19.215247  107298 system_pods.go:61] "csi-hostpath-attacher-0" [5b08e2d1-6ecc-4500-82c7-1163b840f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0826 10:48:19.215258  107298 system_pods.go:61] "csi-hostpath-resizer-0" [b3b0e195-ef58-49e3-9bc3-197ea739961f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0826 10:48:19.215269  107298 system_pods.go:61] "csi-hostpathplugin-dqt92" [e5c11c5c-dc5c-4e44-90bd-7fd30cff1ebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0826 10:48:19.215276  107298 system_pods.go:61] "etcd-addons-530639" [083a7cd1-96ca-428a-b150-66940ba38303] Running
	I0826 10:48:19.215287  107298 system_pods.go:61] "kube-apiserver-addons-530639" [33036b21-fd01-4dc2-a607-621408bba9ab] Running
	I0826 10:48:19.215294  107298 system_pods.go:61] "kube-controller-manager-addons-530639" [82b4411c-6afc-4b37-a8b4-c5c859cf55d4] Running
	I0826 10:48:19.215305  107298 system_pods.go:61] "kube-ingress-dns-minikube" [4388a77f-5011-4640-bee8-9dabf8fa9b50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0826 10:48:19.215309  107298 system_pods.go:61] "kube-proxy-qbghq" [041a740f-019e-4b5a-b615-018af363dbb1] Running
	I0826 10:48:19.215314  107298 system_pods.go:61] "kube-scheduler-addons-530639" [f4364302-4a0a-450f-90b4-b0938fc5ee65] Running
	I0826 10:48:19.215320  107298 system_pods.go:61] "metrics-server-8988944d9-jrwr8" [9e91fb1a-4430-468c-81e7-4017deff1c3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 10:48:19.215324  107298 system_pods.go:61] "nvidia-device-plugin-daemonset-dwxvz" [ec199bca-5011-4285-b91f-ad5994dfe228] Running
	I0826 10:48:19.215330  107298 system_pods.go:61] "registry-6fb4cdfc84-22wjc" [32d6b7ea-5422-4b4d-a7fe-209b1fae6bb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0826 10:48:19.215337  107298 system_pods.go:61] "registry-proxy-vmr7f" [b4617f2b-ddb1-47b0-baf2-2418c37ffd7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0826 10:48:19.215345  107298 system_pods.go:61] "snapshot-controller-56fcc65765-4x5ld" [c3ed019a-c3de-4dea-bcd6-48b9d755cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.215354  107298 system_pods.go:61] "snapshot-controller-56fcc65765-whvlf" [4b5b9866-9d35-4282-8de5-c1f17deb402d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.215360  107298 system_pods.go:61] "storage-provisioner" [1241b73f-229a-41df-830b-18467fa1c581] Running
	I0826 10:48:19.215371  107298 system_pods.go:61] "tiller-deploy-b48cc5f79-rr874" [a5ad8512-3f72-43be-a53c-23106bcd3367] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0826 10:48:19.215384  107298 system_pods.go:74] duration metric: took 10.102347ms to wait for pod list to return data ...
	I0826 10:48:19.215399  107298 default_sa.go:34] waiting for default service account to be created ...
	I0826 10:48:19.218267  107298 default_sa.go:45] found service account: "default"
	I0826 10:48:19.218295  107298 default_sa.go:55] duration metric: took 2.886012ms for default service account to be created ...
	I0826 10:48:19.218304  107298 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 10:48:19.227556  107298 system_pods.go:86] 18 kube-system pods found
	I0826 10:48:19.227591  107298 system_pods.go:89] "coredns-6f6b679f8f-wkxkf" [22b66a68-1ed8-47c0-98fb-681f0fc08eca] Running
	I0826 10:48:19.227601  107298 system_pods.go:89] "csi-hostpath-attacher-0" [5b08e2d1-6ecc-4500-82c7-1163b840f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0826 10:48:19.227607  107298 system_pods.go:89] "csi-hostpath-resizer-0" [b3b0e195-ef58-49e3-9bc3-197ea739961f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0826 10:48:19.227616  107298 system_pods.go:89] "csi-hostpathplugin-dqt92" [e5c11c5c-dc5c-4e44-90bd-7fd30cff1ebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0826 10:48:19.227621  107298 system_pods.go:89] "etcd-addons-530639" [083a7cd1-96ca-428a-b150-66940ba38303] Running
	I0826 10:48:19.227625  107298 system_pods.go:89] "kube-apiserver-addons-530639" [33036b21-fd01-4dc2-a607-621408bba9ab] Running
	I0826 10:48:19.227629  107298 system_pods.go:89] "kube-controller-manager-addons-530639" [82b4411c-6afc-4b37-a8b4-c5c859cf55d4] Running
	I0826 10:48:19.227638  107298 system_pods.go:89] "kube-ingress-dns-minikube" [4388a77f-5011-4640-bee8-9dabf8fa9b50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0826 10:48:19.227644  107298 system_pods.go:89] "kube-proxy-qbghq" [041a740f-019e-4b5a-b615-018af363dbb1] Running
	I0826 10:48:19.227649  107298 system_pods.go:89] "kube-scheduler-addons-530639" [f4364302-4a0a-450f-90b4-b0938fc5ee65] Running
	I0826 10:48:19.227659  107298 system_pods.go:89] "metrics-server-8988944d9-jrwr8" [9e91fb1a-4430-468c-81e7-4017deff1c3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 10:48:19.227665  107298 system_pods.go:89] "nvidia-device-plugin-daemonset-dwxvz" [ec199bca-5011-4285-b91f-ad5994dfe228] Running
	I0826 10:48:19.227671  107298 system_pods.go:89] "registry-6fb4cdfc84-22wjc" [32d6b7ea-5422-4b4d-a7fe-209b1fae6bb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0826 10:48:19.227680  107298 system_pods.go:89] "registry-proxy-vmr7f" [b4617f2b-ddb1-47b0-baf2-2418c37ffd7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0826 10:48:19.227690  107298 system_pods.go:89] "snapshot-controller-56fcc65765-4x5ld" [c3ed019a-c3de-4dea-bcd6-48b9d755cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.227698  107298 system_pods.go:89] "snapshot-controller-56fcc65765-whvlf" [4b5b9866-9d35-4282-8de5-c1f17deb402d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0826 10:48:19.227703  107298 system_pods.go:89] "storage-provisioner" [1241b73f-229a-41df-830b-18467fa1c581] Running
	I0826 10:48:19.227708  107298 system_pods.go:89] "tiller-deploy-b48cc5f79-rr874" [a5ad8512-3f72-43be-a53c-23106bcd3367] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0826 10:48:19.227717  107298 system_pods.go:126] duration metric: took 9.407266ms to wait for k8s-apps to be running ...
	I0826 10:48:19.227727  107298 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 10:48:19.227783  107298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 10:48:19.243008  107298 system_svc.go:56] duration metric: took 15.266444ms WaitForService to wait for kubelet
	I0826 10:48:19.243047  107298 kubeadm.go:582] duration metric: took 19.73788638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 10:48:19.243071  107298 node_conditions.go:102] verifying NodePressure condition ...
	I0826 10:48:19.247386  107298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 10:48:19.247415  107298 node_conditions.go:123] node cpu capacity is 2
	I0826 10:48:19.247443  107298 node_conditions.go:105] duration metric: took 4.367236ms to run NodePressure ...
	I0826 10:48:19.247457  107298 start.go:241] waiting for startup goroutines ...
	I0826 10:48:19.331566  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:19.333731  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:19.334177  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:19.406009  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:19.832849  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:19.834601  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:19.836920  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:19.906989  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:20.332290  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:20.336609  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:20.336643  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:20.407021  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:20.831647  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:20.838542  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:20.838643  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:20.906521  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:21.333182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:21.340154  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:21.343100  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:21.407728  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:21.831300  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:21.834920  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:21.835963  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:21.906708  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:22.331436  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:22.335245  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:22.335398  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:22.406195  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:22.832391  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:22.839473  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:22.840457  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:22.909564  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:23.331747  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:23.334013  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:23.334412  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:23.542973  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:23.832582  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:23.835493  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:23.836458  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:23.906109  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:24.334607  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:24.335860  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:24.336623  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:24.406530  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:24.831594  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:24.835189  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:24.835447  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:24.905521  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:25.331010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:25.334500  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:25.335146  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:25.408118  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:25.830999  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:25.833198  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:25.834096  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:25.905505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:26.331315  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:26.334482  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:26.334528  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:26.406446  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:26.832908  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:26.835441  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:26.835824  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:26.907010  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:27.331966  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:27.334906  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:27.335676  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:27.406060  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:27.835148  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:27.835309  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:27.835577  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:27.906413  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:28.331377  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:28.334554  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:28.334795  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:28.406321  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:28.830975  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:28.833256  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:28.835018  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:28.908116  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:29.330919  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:29.333488  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:29.337487  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:29.407949  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:29.831891  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:29.833834  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:29.834519  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:29.906330  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:30.331438  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:30.334410  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:30.334717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:30.406938  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:30.831988  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:30.835620  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:30.835810  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:30.906442  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:31.331285  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:31.334226  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:31.334764  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:31.407026  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:31.832038  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:31.834371  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:31.835041  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:31.906040  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:32.332116  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:32.334119  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:32.334636  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:32.405977  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:32.832481  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:32.834548  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:32.835607  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:32.906369  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:33.352800  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:33.352894  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:33.353401  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:33.579162  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:33.833470  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:33.835496  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:33.835897  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:33.906136  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:34.331914  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:34.334939  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:34.335588  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:34.406083  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:34.832691  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:34.834540  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:34.834921  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:34.906507  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:35.332280  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:35.336685  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:35.337560  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:35.407020  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:35.834097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:35.835019  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:35.835149  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:35.907118  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:36.334675  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:36.335095  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:36.336063  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:36.405863  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:36.832544  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:36.836553  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:36.836907  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:36.905773  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:37.332209  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:37.335078  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:37.336218  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:37.406551  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:37.831261  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:37.834263  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:37.835205  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:37.906266  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:38.333248  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:38.335889  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:38.336524  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:38.406425  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:38.831645  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:38.834895  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:38.835516  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:38.933600  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:39.331190  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:39.333954  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:39.334038  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:39.405831  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:39.831033  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:39.833325  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:39.834815  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:39.906182  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:40.331364  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:40.336530  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:40.336762  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:40.407104  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:40.830494  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:40.835359  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:40.835442  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:40.906611  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:41.331043  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:41.333726  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:41.334373  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:41.405957  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:41.832912  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:41.834862  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:41.835280  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:41.905386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:42.331694  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:42.337206  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:42.337255  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:42.406196  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:42.831800  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:42.834289  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:42.834564  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:42.906171  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:43.331678  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:43.334792  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:43.335246  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:43.436081  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:43.831975  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:43.838692  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:43.839149  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:43.912196  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:44.332034  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:44.334323  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:44.334471  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:44.406371  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:44.831495  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:44.834868  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:44.834894  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:44.906333  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:45.332097  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:45.333506  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:45.335371  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:45.406703  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:45.831229  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:45.834340  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:45.835335  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:45.907188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:46.331562  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:46.335190  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:46.335488  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:46.405824  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:46.832167  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:46.834605  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:46.835706  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:46.906344  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:47.331017  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:47.335717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:47.336106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:47.406324  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:47.831330  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:47.833529  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:47.835156  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:47.905508  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:48.331490  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:48.334321  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:48.334546  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:48.406284  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:48.961937  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:48.962506  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:48.962904  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:48.963410  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:49.331613  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:49.333937  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:49.335175  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:49.405825  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:49.831915  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:49.835112  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0826 10:48:49.836530  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:49.906098  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:50.331538  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:50.337336  107298 kapi.go:107] duration metric: took 42.007017907s to wait for kubernetes.io/minikube-addons=registry ...
	I0826 10:48:50.337393  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:50.407088  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:50.831547  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:50.834419  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:50.906138  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:51.331047  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:51.334074  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:51.405579  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:51.831569  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:51.834699  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:51.907515  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:52.333622  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:52.335791  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:52.435626  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:52.831742  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:52.835502  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:52.906320  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:53.332208  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:53.336841  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:53.409561  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:53.831432  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:53.838429  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:53.905534  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:54.331843  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:54.334717  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:54.406417  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.041771  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.042675  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.043144  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.333373  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.337752  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.437386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:55.831558  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:55.834545  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:55.906131  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:56.331120  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:56.338380  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:56.407623  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:56.833014  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:56.836180  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:56.906644  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:57.331073  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:57.334613  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:57.406453  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:57.831317  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:57.834375  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:57.905944  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:58.332386  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:58.335066  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:58.406240  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:58.831794  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:58.834297  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:58.906103  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:59.331533  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:59.334204  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:59.406869  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:48:59.830823  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:48:59.834516  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:48:59.905763  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:00.331601  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:00.334387  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:00.406106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:00.831255  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:00.835117  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:00.907114  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:01.331810  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:01.334394  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:01.406240  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:01.831783  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:01.834506  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:01.906434  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:02.332188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:02.334724  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:02.407694  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:02.831323  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:02.834559  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:02.905920  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:03.332053  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:03.335037  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:03.406463  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:03.831525  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:03.833994  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:03.906684  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:04.331225  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:04.335071  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:04.406294  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:04.831025  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:04.834335  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:04.906188  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:05.331106  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:05.334802  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:05.407108  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:05.852185  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:05.944317  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:05.944413  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:06.330901  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:06.334237  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:06.405455  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:06.831241  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:06.834339  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:06.905505  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:07.330968  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:07.334410  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:07.406723  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:07.833019  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:07.835745  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:07.907398  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:08.331670  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:08.334918  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:08.406045  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:08.842419  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:08.844686  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:08.906570  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:09.332380  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:09.335241  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:09.405818  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:09.832263  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:09.835137  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:09.934532  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:10.332036  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:10.334412  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:10.406308  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:10.831939  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:10.836360  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.396630  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:11.396833  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.397531  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:11.410122  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:11.831120  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:11.834765  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:11.906853  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:12.331701  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:12.335277  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:12.405751  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:12.831126  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:12.834746  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:12.906237  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:13.332004  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:13.437080  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:13.437250  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:13.832326  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:13.834284  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:13.921527  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:14.333782  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:14.336731  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:14.407254  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:14.833692  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:14.839153  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:14.906402  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:15.333236  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:15.336121  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:15.405620  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:15.831992  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:15.835724  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:15.912050  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:16.332519  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:16.342398  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:16.407498  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:16.831775  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:16.836671  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:16.906376  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:17.331446  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:17.334923  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:17.405595  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:17.831029  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:17.834357  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:17.913955  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:18.501260  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:18.501489  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:18.501850  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:18.832007  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:18.834526  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:18.907468  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:19.331943  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:19.336074  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:19.408218  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:19.831848  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:19.836042  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:19.906280  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:20.331231  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:20.334876  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:20.406341  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:20.878426  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:20.878825  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:20.998874  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:21.331203  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:21.334220  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:21.405454  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:21.831320  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:21.834889  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:21.906598  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:22.331299  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:22.333953  107298 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0826 10:49:22.405546  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:22.837014  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:22.837818  107298 kapi.go:107] duration metric: took 1m14.507073831s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0826 10:49:22.906673  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:23.330962  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:23.406112  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:23.832414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:23.933956  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:24.332355  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:24.407375  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:24.832379  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:24.906132  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:25.330592  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:25.406926  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:25.832522  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:25.906726  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:26.331918  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:26.406042  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:26.832414  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:26.907628  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:27.331597  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:27.405677  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:27.831504  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:27.910799  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:28.332815  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0826 10:49:28.435040  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:28.835299  107298 kapi.go:107] duration metric: took 1m17.007809993s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0826 10:49:28.836778  107298 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-530639 cluster.
	I0826 10:49:28.837955  107298 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0826 10:49:28.839307  107298 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
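	(Editor's reference, not part of the captured log: the gcp-auth messages above mention the `gcp-auth-skip-secret` pod label. A minimal sketch of a pod manifest that opts out of credential mounting is shown below; the pod name and image are placeholders, and the only relevant part is the label under metadata.)

	    # Hypothetical example: a pod labelled so the gcp-auth webhook
	    # skips mounting GCP credentials into it.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # placeholder name
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	        - name: app
	          image: nginx              # placeholder image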
	I0826 10:49:28.935418  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:29.405952  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:29.906949  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:30.405914  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:30.907336  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:31.406364  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:31.906023  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:32.406317  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:32.906873  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:33.407125  107298 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0826 10:49:33.906462  107298 kapi.go:107] duration metric: took 1m24.005211968s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0826 10:49:33.908602  107298 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, yakd, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0826 10:49:33.909939  107298 addons.go:510] duration metric: took 1m34.404745333s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget yakd helm-tiller storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0826 10:49:33.909983  107298 start.go:246] waiting for cluster config update ...
	I0826 10:49:33.910008  107298 start.go:255] writing updated cluster config ...
	I0826 10:49:33.910295  107298 ssh_runner.go:195] Run: rm -f paused
	I0826 10:49:33.969647  107298 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 10:49:33.971684  107298 out.go:177] * Done! kubectl is now configured to use "addons-530639" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.893552171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669783893520247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffbed45d-5a89-4224-8c23-d58883044fa2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.894133402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a508caa-4cdd-4937-8670-30bb9a66a1e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.894239375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a508caa-4cdd-4937-8670-30bb9a66a1e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.894483554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17246
69268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a508caa-4cdd-4937-8670-30bb9a66a1e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.930705663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=715171e6-66cc-41a4-8a89-5b36ecdad3c9 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.930787020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=715171e6-66cc-41a4-8a89-5b36ecdad3c9 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.932066703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bca0fca-9769-439f-858e-7a320851eae0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.933607927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669783933575282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bca0fca-9769-439f-858e-7a320851eae0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.934215701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b81c8058-a926-4eee-80b8-63c737416a51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.934277369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b81c8058-a926-4eee-80b8-63c737416a51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.934554118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17246
69268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b81c8058-a926-4eee-80b8-63c737416a51 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.977479543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70917799-e898-480c-934d-be867e8ecf90 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.977570530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70917799-e898-480c-934d-be867e8ecf90 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.978993327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6de3442-9adb-4af2-8654-d49eb2c4a1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.980283764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669783980248113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6de3442-9adb-4af2-8654-d49eb2c4a1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.980969613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e54f8b10-fdf0-4e75-a68e-bad23653898b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.981035267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e54f8b10-fdf0-4e75-a68e-bad23653898b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:23 addons-530639 crio[683]: time="2024-08-26 10:56:23.981442577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17246
69268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e54f8b10-fdf0-4e75-a68e-bad23653898b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.015113536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=053dde8e-7dcb-4b2e-aea1-c4ebfaeed950 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.015229839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=053dde8e-7dcb-4b2e-aea1-c4ebfaeed950 name=/runtime.v1.RuntimeService/Version
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.016559610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=999485d9-e958-4e0b-b61d-e740b8b15c94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.017924297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669784017869864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=999485d9-e958-4e0b-b61d-e740b8b15c94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.018688481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05a5536f-4a03-43ff-a54b-63e347c3aa4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.018755422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05a5536f-4a03-43ff-a54b-63e347c3aa4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 10:56:24 addons-530639 crio[683]: time="2024-08-26 10:56:24.019009018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:133475e45b395768f9a86b5d9d29ef8b4f94c30cbbd9ece7d5b0af9ea2a075fb,PodSandboxId:7c61481d4c53c9da981fc68c1dff0f056b9b817cf9d5b0242f755608bf72e722,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724669607306896321,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s42mb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e580bcd-e483-4db3-b57b-59290cd40f30,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdc81d3d54b853f3c029bf35234844d6b28e3f0dd7518737d6b932f80bb514b,PodSandboxId:ea9b34094dd38b38d39f907813892e32fda00a15dc90e8345a2e89f5b55168dc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724669466087738467,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e8dbed1-3f57-4b20-9a93-c5e31a3f18f0,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3ca2a5e47b8b4aa27ee321a368430538f1e7a10cc745764285f325ef61f326,PodSandboxId:96985e5c2c9fb2cd56c7d456d8b81875deba2f4cb158c03bb669d118fcbdcad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724669377641360105,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 569a0aa8-0b7f-48e8-9
857-7b842118128d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca,PodSandboxId:578fa2369817bed956940489ec2e905738179bd65a6654708b5e6dd8445b5080,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724669332230613970,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-jrwr8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9e91fb1a-4430-468c-81e7-4017deff1c3c,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3,PodSandboxId:da592225294749f79c393e55503908fe7866a419b4c2d82c21be80ee7c822a92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724669286788747172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1241b73f-229a-41df-830b-18467fa1c581,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d,PodSandboxId:87f28fd3acff778798fac5002ccd0ae6057fb42566ec116d781c9f8d399d547f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724669284077620158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-wkxkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b66a68-1ed8-47c0-98fb-681f0fc08eca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413,PodSandboxId:0ae7cb019e3d91eb094ed590f8d46da77e059e58fdcdec68c62efc505dfcf173,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724669281751395254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbghq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041a740f-019e-4b5a-b615-018af363dbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5,PodSandboxId:d6aaa3a2860076119e487c1765f43180b7f146f7d06ca9b66057e0614995b19e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724669269105052961,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc5c75d55afd25cbf49f8c9c1515e02,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9,PodSandboxId:4bb95cb3cc16ea4224d1fbfd35500ce12bc9a1be9d36ef3b1ee5f50b75a6b5b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724669269080485351,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903aeb6456cc069c62974b42d8088a75,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283,PodSandboxId:324e6d2c78486f5ac780a357871fdcdbd206f3e28c1c4a3d2fffb8120a14e964,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724669269084405064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353ccf56fead8c783c0da330f049c6f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff,PodSandboxId:532ee159b1e2e85e95238bebbb451bf905edde72871b281799df73cc610dfa5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17246
69268878481058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-530639,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73c5d9ce1def0f6be0c13d9d869a4e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05a5536f-4a03-43ff-a54b-63e347c3aa4f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	133475e45b395       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   7c61481d4c53c       hello-world-app-55bf9c44b4-s42mb
	bfdc81d3d54b8       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   ea9b34094dd38       nginx
	be3ca2a5e47b8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   96985e5c2c9fb       busybox
	121bffb9cc142       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   578fa2369817b       metrics-server-8988944d9-jrwr8
	5df9b3d6329be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   da59222529474       storage-provisioner
	87dd1ca50a348       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   87f28fd3acff7       coredns-6f6b679f8f-wkxkf
	f706c8457e5a4       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        8 minutes ago       Running             kube-proxy                0                   0ae7cb019e3d9       kube-proxy-qbghq
	850d60ba14a0e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   d6aaa3a286007       kube-controller-manager-addons-530639
	dbc3017a5018f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   324e6d2c78486       kube-apiserver-addons-530639
	c7ef02e3f3f47       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   4bb95cb3cc16e       etcd-addons-530639
	cf0248ce67564       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   532ee159b1e2e       kube-scheduler-addons-530639
	
	
	==> coredns [87dd1ca50a348a69c9d1b0c17d0199aabe23980762d54eb577ddb87a81ffe10d] <==
	[INFO] 10.244.0.6:55852 - 11952 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000164892s
	[INFO] 10.244.0.6:34625 - 29309 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115511s
	[INFO] 10.244.0.6:34625 - 12415 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109533s
	[INFO] 10.244.0.6:48051 - 33099 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086298s
	[INFO] 10.244.0.6:48051 - 6580 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000172189s
	[INFO] 10.244.0.6:42931 - 51868 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001587s
	[INFO] 10.244.0.6:42931 - 49050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000343912s
	[INFO] 10.244.0.6:56078 - 36944 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121876s
	[INFO] 10.244.0.6:56078 - 19309 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179768s
	[INFO] 10.244.0.6:56880 - 17373 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006441s
	[INFO] 10.244.0.6:56880 - 41690 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086919s
	[INFO] 10.244.0.6:54257 - 57543 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036253s
	[INFO] 10.244.0.6:54257 - 63429 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085994s
	[INFO] 10.244.0.6:37482 - 44331 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005419s
	[INFO] 10.244.0.6:37482 - 5160 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000083837s
	[INFO] 10.244.0.22:56868 - 42487 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496744s
	[INFO] 10.244.0.22:51782 - 33332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104268s
	[INFO] 10.244.0.22:52598 - 48035 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136635s
	[INFO] 10.244.0.22:36639 - 25382 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157254s
	[INFO] 10.244.0.22:58956 - 16134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181509s
	[INFO] 10.244.0.22:48044 - 1700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000177677s
	[INFO] 10.244.0.22:48615 - 29917 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000974955s
	[INFO] 10.244.0.22:48073 - 30258 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000530425s
	[INFO] 10.244.0.26:37839 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000559466s
	[INFO] 10.244.0.26:50142 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153453s
	
	
	==> describe nodes <==
	Name:               addons-530639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-530639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=addons-530639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T10_47_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-530639
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 10:47:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-530639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 10:56:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 10:54:03 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 10:54:03 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 10:54:03 +0000   Mon, 26 Aug 2024 10:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 10:54:03 +0000   Mon, 26 Aug 2024 10:47:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-530639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 18660512e3354c8b94a86e929f9b1e5f
	  System UUID:                18660512-e335-4c8b-94a8-6e929f9b1e5f
	  Boot ID:                    0105ed9d-b779-4196-ba39-b27baf284166
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  default                     hello-world-app-55bf9c44b4-s42mb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 coredns-6f6b679f8f-wkxkf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m25s
	  kube-system                 etcd-addons-530639                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m30s
	  kube-system                 kube-apiserver-addons-530639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-addons-530639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-qbghq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-addons-530639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m20s  kube-proxy       
	  Normal  Starting                 8m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s  kubelet          Node addons-530639 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s  kubelet          Node addons-530639 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s  kubelet          Node addons-530639 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m29s  kubelet          Node addons-530639 status is now: NodeReady
	  Normal  RegisteredNode           8m26s  node-controller  Node addons-530639 event: Registered Node addons-530639 in Controller
	
	
	==> dmesg <==
	[Aug26 10:48] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.215658] kauditd_printk_skb: 145 callbacks suppressed
	[  +8.039213] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.179381] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.372386] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.805376] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 10:49] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.375674] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.151947] kauditd_printk_skb: 44 callbacks suppressed
	[  +8.367540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.560554] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.170050] kauditd_printk_skb: 48 callbacks suppressed
	[ +13.674326] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.839545] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 10:50] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.952392] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.125009] kauditd_printk_skb: 49 callbacks suppressed
	[  +7.893956] kauditd_printk_skb: 43 callbacks suppressed
	[  +7.297447] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.127908] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.005216] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.928262] kauditd_printk_skb: 72 callbacks suppressed
	[Aug26 10:51] kauditd_printk_skb: 49 callbacks suppressed
	[Aug26 10:53] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.166493] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c7ef02e3f3f471a258c810ea613a7e61c00c695bbd17b5a51c9095fa4482f2a9] <==
	{"level":"info","ts":"2024-08-26T10:49:20.863934Z","caller":"traceutil/trace.go:171","msg":"trace[524759163] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1096; }","duration":"179.001135ms","start":"2024-08-26T10:49:20.684922Z","end":"2024-08-26T10:49:20.863923Z","steps":["trace[524759163] 'range keys from in-memory index tree'  (duration: 178.81365ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:20.976110Z","caller":"traceutil/trace.go:171","msg":"trace[273128302] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"139.70632ms","start":"2024-08-26T10:49:20.836385Z","end":"2024-08-26T10:49:20.976091Z","steps":["trace[273128302] 'process raft request'  (duration: 139.491341ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:20.983355Z","caller":"traceutil/trace.go:171","msg":"trace[1910683933] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"114.30175ms","start":"2024-08-26T10:49:20.869031Z","end":"2024-08-26T10:49:20.983333Z","steps":["trace[1910683933] 'process raft request'  (duration: 113.493673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:49:31.723378Z","caller":"traceutil/trace.go:171","msg":"trace[82969359] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"226.209751ms","start":"2024-08-26T10:49:31.497152Z","end":"2024-08-26T10:49:31.723361Z","steps":["trace[82969359] 'process raft request'  (duration: 225.752448ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:26.925559Z","caller":"traceutil/trace.go:171","msg":"trace[2143165711] linearizableReadLoop","detail":"{readStateIndex:1574; appliedIndex:1573; }","duration":"111.85999ms","start":"2024-08-26T10:50:26.813682Z","end":"2024-08-26T10:50:26.925542Z","steps":["trace[2143165711] 'read index received'  (duration: 111.678539ms)","trace[2143165711] 'applied index is now lower than readState.Index'  (duration: 180.99µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T10:50:26.925664Z","caller":"traceutil/trace.go:171","msg":"trace[717428587] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1521; }","duration":"136.051627ms","start":"2024-08-26T10:50:26.789606Z","end":"2024-08-26T10:50:26.925658Z","steps":["trace[717428587] 'process raft request'  (duration: 135.824777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:26.925923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.195427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-26T10:50:26.925972Z","caller":"traceutil/trace.go:171","msg":"trace[1800604281] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1521; }","duration":"112.307538ms","start":"2024-08-26T10:50:26.813656Z","end":"2024-08-26T10:50:26.925964Z","steps":["trace[1800604281] 'agreement among raft nodes before linearized reading'  (duration: 112.172704ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:31.466516Z","caller":"traceutil/trace.go:171","msg":"trace[1925241876] linearizableReadLoop","detail":"{readStateIndex:1612; appliedIndex:1611; }","duration":"280.561173ms","start":"2024-08-26T10:50:31.185931Z","end":"2024-08-26T10:50:31.466492Z","steps":["trace[1925241876] 'read index received'  (duration: 280.413549ms)","trace[1925241876] 'applied index is now lower than readState.Index'  (duration: 147.002µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:31.466657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.712124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:31.466675Z","caller":"traceutil/trace.go:171","msg":"trace[1529817644] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1558; }","duration":"280.756023ms","start":"2024-08-26T10:50:31.185914Z","end":"2024-08-26T10:50:31.466670Z","steps":["trace[1529817644] 'agreement among raft nodes before linearized reading'  (duration: 280.6527ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:31.466921Z","caller":"traceutil/trace.go:171","msg":"trace[1623315541] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"299.082237ms","start":"2024-08-26T10:50:31.167827Z","end":"2024-08-26T10:50:31.466910Z","steps":["trace[1623315541] 'process raft request'  (duration: 298.581965ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:33.611823Z","caller":"traceutil/trace.go:171","msg":"trace[1173019852] linearizableReadLoop","detail":"{readStateIndex:1615; appliedIndex:1614; }","duration":"130.313717ms","start":"2024-08-26T10:50:33.481495Z","end":"2024-08-26T10:50:33.611808Z","steps":["trace[1173019852] 'read index received'  (duration: 130.120378ms)","trace[1173019852] 'applied index is now lower than readState.Index'  (duration: 192.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:33.611933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.420322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:33.611953Z","caller":"traceutil/trace.go:171","msg":"trace[432714119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1560; }","duration":"130.45795ms","start":"2024-08-26T10:50:33.481490Z","end":"2024-08-26T10:50:33.611948Z","steps":["trace[432714119] 'agreement among raft nodes before linearized reading'  (duration: 130.39853ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:33.720134Z","caller":"traceutil/trace.go:171","msg":"trace[740183580] linearizableReadLoop","detail":"{readStateIndex:1616; appliedIndex:1615; }","duration":"107.119888ms","start":"2024-08-26T10:50:33.613001Z","end":"2024-08-26T10:50:33.720121Z","steps":["trace[740183580] 'read index received'  (duration: 105.131996ms)","trace[740183580] 'applied index is now lower than readState.Index'  (duration: 1.987494ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T10:50:33.720259Z","caller":"traceutil/trace.go:171","msg":"trace[307227972] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"107.44277ms","start":"2024-08-26T10:50:33.612803Z","end":"2024-08-26T10:50:33.720245Z","steps":["trace[307227972] 'process raft request'  (duration: 105.43197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:33.720331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.31431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T10:50:33.720354Z","caller":"traceutil/trace.go:171","msg":"trace[1408992908] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1561; }","duration":"107.352183ms","start":"2024-08-26T10:50:33.612996Z","end":"2024-08-26T10:50:33.720348Z","steps":["trace[1408992908] 'agreement among raft nodes before linearized reading'  (duration: 107.266317ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:50.016616Z","caller":"traceutil/trace.go:171","msg":"trace[448022152] transaction","detail":"{read_only:false; response_revision:1667; number_of_response:1; }","duration":"348.950674ms","start":"2024-08-26T10:50:49.667608Z","end":"2024-08-26T10:50:50.016558Z","steps":["trace[448022152] 'process raft request'  (duration: 348.828505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T10:50:50.016902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T10:50:49.667591Z","time spent":"349.158384ms","remote":"127.0.0.1:46290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1633 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-26T10:50:50.017506Z","caller":"traceutil/trace.go:171","msg":"trace[1108140281] linearizableReadLoop","detail":"{readStateIndex:1727; appliedIndex:1727; }","duration":"204.104825ms","start":"2024-08-26T10:50:49.813391Z","end":"2024-08-26T10:50:50.017496Z","steps":["trace[1108140281] 'read index received'  (duration: 204.10041ms)","trace[1108140281] 'applied index is now lower than readState.Index'  (duration: 3.442µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T10:50:50.017693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.293445ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-xfr8k\" ","response":"range_response_count:1 size:10376"}
	{"level":"info","ts":"2024-08-26T10:50:50.017718Z","caller":"traceutil/trace.go:171","msg":"trace[1471716803] range","detail":"{range_begin:/registry/pods/gadget/gadget-xfr8k; range_end:; response_count:1; response_revision:1667; }","duration":"204.326554ms","start":"2024-08-26T10:50:49.813386Z","end":"2024-08-26T10:50:50.017713Z","steps":["trace[1471716803] 'agreement among raft nodes before linearized reading'  (duration: 204.151834ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T10:50:50.022492Z","caller":"traceutil/trace.go:171","msg":"trace[1105980089] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"206.381948ms","start":"2024-08-26T10:50:49.816093Z","end":"2024-08-26T10:50:50.022475Z","steps":["trace[1105980089] 'process raft request'  (duration: 206.311908ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:56:24 up 9 min,  0 users,  load average: 0.16, 0.53, 0.43
	Linux addons-530639 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dbc3017a5018f6843a45486bd72369945b9cbe4f41f49b5f8032a05bc0e17283] <==
	E0826 10:50:01.787679       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0826 10:50:01.788954       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0826 10:50:01.790406       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.916505ms" method="GET" path="/api/v1/pods" result=null
	I0826 10:50:21.539546       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.219.107"}
	E0826 10:50:33.721846       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0826 10:50:42.251872       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0826 10:50:57.733999       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0826 10:50:58.852809       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0826 10:50:59.242864       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.39.11:8443->10.244.0.17:43406: write: connection reset by peer" logger="UnhandledError"
	I0826 10:51:01.749646       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0826 10:51:01.943459       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.137.75"}
	I0826 10:51:05.883527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.883581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.925932       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.925977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.935635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.935856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:05.970054       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:05.970724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0826 10:51:06.025827       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0826 10:51:06.025958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0826 10:51:06.934630       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0826 10:51:07.027311       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0826 10:51:07.199012       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0826 10:53:24.484691       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.97.205"}
	
	
	==> kube-controller-manager [850d60ba14a0e6130d4745275ab9ac327c32992e28617e520bc8a54afb585ba5] <==
	W0826 10:54:39.289348       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:54:39.289563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:54:45.121040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:54:45.121157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:54:46.603948       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:54:46.604026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:54:52.378387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:54:52.378476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:55:15.639517       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:55:15.639599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:55:24.891817       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:55:24.891860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:55:32.821722       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:55:32.821910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:55:38.212485       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:55:38.212626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:56:00.830833       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:56:00.831027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:56:15.577101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:56:15.577257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:56:15.896829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:56:15.896906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0826 10:56:21.473253       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0826 10:56:21.473306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0826 10:56:22.965541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="11.518µs"
	
	
	==> kube-proxy [f706c8457e5a4ae3a67e85f2a5ffb4685e10bf996775c1c278d33e8495e69413] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 10:48:03.807402       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 10:48:03.837882       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0826 10:48:03.837957       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 10:48:03.925794       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 10:48:03.925830       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 10:48:03.925856       1 server_linux.go:169] "Using iptables Proxier"
	I0826 10:48:03.932881       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 10:48:03.933093       1 server.go:483] "Version info" version="v1.31.0"
	I0826 10:48:03.933102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 10:48:03.934378       1 config.go:197] "Starting service config controller"
	I0826 10:48:03.934400       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 10:48:03.934434       1 config.go:104] "Starting endpoint slice config controller"
	I0826 10:48:03.934439       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 10:48:03.934870       1 config.go:326] "Starting node config controller"
	I0826 10:48:03.934877       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 10:48:04.035238       1 shared_informer.go:320] Caches are synced for node config
	I0826 10:48:04.035289       1 shared_informer.go:320] Caches are synced for service config
	I0826 10:48:04.035328       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cf0248ce67564e57e469c5c3a550ba81a8d0ee75113804e1c617e3abf857e8ff] <==
	W0826 10:47:51.591515       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 10:47:51.591561       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 10:47:52.476933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 10:47:52.477013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.610460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.610603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.640226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.640357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.691788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 10:47:52.691923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.773574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.773971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.800991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 10:47:52.801128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.819877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 10:47:52.820026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.851897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 10:47:52.852042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.913578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 10:47:52.914170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.922023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 10:47:52.922220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 10:47:52.962778       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 10:47:52.962870       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0826 10:47:55.068252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 10:55:44 addons-530639 kubelet[1221]: E0826 10:55:44.852173    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669744851413731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:55:54 addons-530639 kubelet[1221]: E0826 10:55:54.580477    1221 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 10:55:54 addons-530639 kubelet[1221]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 10:55:54 addons-530639 kubelet[1221]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 10:55:54 addons-530639 kubelet[1221]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 10:55:54 addons-530639 kubelet[1221]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 10:55:54 addons-530639 kubelet[1221]: E0826 10:55:54.854912    1221 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669754854490038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:55:54 addons-530639 kubelet[1221]: E0826 10:55:54.855119    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669754854490038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:56:04 addons-530639 kubelet[1221]: I0826 10:56:04.550273    1221 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 26 10:56:04 addons-530639 kubelet[1221]: E0826 10:56:04.858638    1221 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669764857806549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:56:04 addons-530639 kubelet[1221]: E0826 10:56:04.858771    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669764857806549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:56:14 addons-530639 kubelet[1221]: E0826 10:56:14.861649    1221 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669774861307578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:56:14 addons-530639 kubelet[1221]: E0826 10:56:14.861693    1221 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724669774861307578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 10:56:22 addons-530639 kubelet[1221]: I0826 10:56:22.996173    1221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-s42mb" podStartSLOduration=176.592447042 podStartE2EDuration="2m58.996147282s" podCreationTimestamp="2024-08-26 10:53:24 +0000 UTC" firstStartedPulling="2024-08-26 10:53:24.888686561 +0000 UTC m=+330.488717581" lastFinishedPulling="2024-08-26 10:53:27.292386795 +0000 UTC m=+332.892417821" observedRunningTime="2024-08-26 10:53:27.70410282 +0000 UTC m=+333.304133855" watchObservedRunningTime="2024-08-26 10:56:22.996147282 +0000 UTC m=+508.596178318"
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.333991    1221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e91fb1a-4430-468c-81e7-4017deff1c3c-tmp-dir\") pod \"9e91fb1a-4430-468c-81e7-4017deff1c3c\" (UID: \"9e91fb1a-4430-468c-81e7-4017deff1c3c\") "
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.334036    1221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jz7j\" (UniqueName: \"kubernetes.io/projected/9e91fb1a-4430-468c-81e7-4017deff1c3c-kube-api-access-9jz7j\") pod \"9e91fb1a-4430-468c-81e7-4017deff1c3c\" (UID: \"9e91fb1a-4430-468c-81e7-4017deff1c3c\") "
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.334501    1221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e91fb1a-4430-468c-81e7-4017deff1c3c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9e91fb1a-4430-468c-81e7-4017deff1c3c" (UID: "9e91fb1a-4430-468c-81e7-4017deff1c3c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.336347    1221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e91fb1a-4430-468c-81e7-4017deff1c3c-kube-api-access-9jz7j" (OuterVolumeSpecName: "kube-api-access-9jz7j") pod "9e91fb1a-4430-468c-81e7-4017deff1c3c" (UID: "9e91fb1a-4430-468c-81e7-4017deff1c3c"). InnerVolumeSpecName "kube-api-access-9jz7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.348171    1221 scope.go:117] "RemoveContainer" containerID="121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca"
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.381122    1221 scope.go:117] "RemoveContainer" containerID="121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca"
	Aug 26 10:56:24 addons-530639 kubelet[1221]: E0826 10:56:24.382128    1221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca\": container with ID starting with 121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca not found: ID does not exist" containerID="121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca"
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.382163    1221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca"} err="failed to get container status \"121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca\": rpc error: code = NotFound desc = could not find container \"121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca\": container with ID starting with 121bffb9cc142eee710aa3911390c8686aec7302a5eb6244f6bb27ae1b03fcca not found: ID does not exist"
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.435352    1221 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9jz7j\" (UniqueName: \"kubernetes.io/projected/9e91fb1a-4430-468c-81e7-4017deff1c3c-kube-api-access-9jz7j\") on node \"addons-530639\" DevicePath \"\""
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.435422    1221 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e91fb1a-4430-468c-81e7-4017deff1c3c-tmp-dir\") on node \"addons-530639\" DevicePath \"\""
	Aug 26 10:56:24 addons-530639 kubelet[1221]: I0826 10:56:24.552065    1221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e91fb1a-4430-468c-81e7-4017deff1c3c" path="/var/lib/kubelet/pods/9e91fb1a-4430-468c-81e7-4017deff1c3c/volumes"
	
	
	==> storage-provisioner [5df9b3d6329bef4a722004fdeb1452de2790887aac09304b964f1bf0e6335ba3] <==
	I0826 10:48:07.590913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 10:48:07.646798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 10:48:07.646867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 10:48:07.718875       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 10:48:07.722902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee!
	I0826 10:48:07.727950       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70f5968a-b06f-4f29-9b7f-a8947c63df74", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee became leader
	I0826 10:48:07.894826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-530639_bdd99f87-1a21-4df7-8f25-a08507efa6ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-530639 -n addons-530639
helpers_test.go:261: (dbg) Run:  kubectl --context addons-530639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (358.23s)
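The post-mortem checks above (helpers_test.go:254 and helpers_test.go:261) can be repeated by hand against the same profile. A minimal sketch, assuming the addons-530639 profile still exists and the out/minikube-linux-amd64 binary built for this run; quoting is added only to keep the Go template and jsonpath expressions away from the shell:

	# same apiserver status check as helpers_test.go:254
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p addons-530639 -n addons-530639
	# same "any pod not Running" check as helpers_test.go:261
	kubectl --context addons-530639 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running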

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-530639
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-530639: exit status 82 (2m0.483977182s)

                                                
                                                
-- stdout --
	* Stopping node "addons-530639"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-530639" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-530639
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-530639: exit status 11 (21.652219462s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-530639" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-530639
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-530639: exit status 11 (6.144494261s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-530639" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-530639
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-530639: exit status 11 (6.14399635s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-530639" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.43s)
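The failing sequence above can be replayed manually to separate the stop timeout (exit status 82) from the follow-on addon errors (exit status 11). A minimal sketch, assuming the same profile and binary as this run; the addon commands are expected to keep failing while SSH to 192.168.39.11 has no route:

	# stop that timed out with GUEST_STOP_TIMEOUT in the test
	out/minikube-linux-amd64 stop -p addons-530639
	# addon toggles that fail with MK_ADDON_ENABLE_PAUSED / MK_ADDON_DISABLE_PAUSED while the VM is unreachable
	out/minikube-linux-amd64 addons enable dashboard -p addons-530639
	out/minikube-linux-amd64 addons disable dashboard -p addons-530639
	out/minikube-linux-amd64 addons disable gvisor -p addons-530639
	# collect logs for a GitHub issue, as suggested in the stderr above
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-530639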

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 node stop m02 -v=7 --alsologtostderr
E0826 11:08:01.453391  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:08:42.415535  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:09:34.326717  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.482520518s)

                                                
                                                
-- stdout --
	* Stopping node "ha-055395-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:07:44.899691  121019 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:07:44.899973  121019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:07:44.899983  121019 out.go:358] Setting ErrFile to fd 2...
	I0826 11:07:44.899989  121019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:07:44.900165  121019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:07:44.900511  121019 mustload.go:65] Loading cluster: ha-055395
	I0826 11:07:44.900943  121019 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:07:44.900961  121019 stop.go:39] StopHost: ha-055395-m02
	I0826 11:07:44.901386  121019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:07:44.901438  121019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:07:44.918098  121019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0826 11:07:44.918620  121019 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:07:44.919274  121019 main.go:141] libmachine: Using API Version  1
	I0826 11:07:44.919305  121019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:07:44.919631  121019 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:07:44.922039  121019 out.go:177] * Stopping node "ha-055395-m02"  ...
	I0826 11:07:44.923557  121019 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 11:07:44.923594  121019 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:07:44.923918  121019 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 11:07:44.923967  121019 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:07:44.927485  121019 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:07:44.928012  121019 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:07:44.928043  121019 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:07:44.928220  121019 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:07:44.928680  121019 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:07:44.928885  121019 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:07:44.929064  121019 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:07:45.018251  121019 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 11:07:45.072720  121019 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 11:07:45.127505  121019 main.go:141] libmachine: Stopping "ha-055395-m02"...
	I0826 11:07:45.127543  121019 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:07:45.129101  121019 main.go:141] libmachine: (ha-055395-m02) Calling .Stop
	I0826 11:07:45.133242  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 0/120
	I0826 11:07:46.134718  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 1/120
	I0826 11:07:47.136048  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 2/120
	I0826 11:07:48.137563  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 3/120
	I0826 11:07:49.138928  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 4/120
	I0826 11:07:50.140620  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 5/120
	I0826 11:07:51.142092  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 6/120
	I0826 11:07:52.143662  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 7/120
	I0826 11:07:53.145434  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 8/120
	I0826 11:07:54.147119  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 9/120
	I0826 11:07:55.149602  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 10/120
	I0826 11:07:56.151103  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 11/120
	I0826 11:07:57.152624  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 12/120
	I0826 11:07:58.154184  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 13/120
	I0826 11:07:59.155934  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 14/120
	I0826 11:08:00.157481  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 15/120
	I0826 11:08:01.158784  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 16/120
	I0826 11:08:02.160455  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 17/120
	I0826 11:08:03.161829  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 18/120
	I0826 11:08:04.163253  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 19/120
	I0826 11:08:05.165253  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 20/120
	I0826 11:08:06.166797  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 21/120
	I0826 11:08:07.168102  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 22/120
	I0826 11:08:08.169463  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 23/120
	I0826 11:08:09.170760  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 24/120
	I0826 11:08:10.172604  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 25/120
	I0826 11:08:11.174139  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 26/120
	I0826 11:08:12.176066  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 27/120
	I0826 11:08:13.177527  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 28/120
	I0826 11:08:14.179080  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 29/120
	I0826 11:08:15.181210  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 30/120
	I0826 11:08:16.182756  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 31/120
	I0826 11:08:17.184027  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 32/120
	I0826 11:08:18.185905  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 33/120
	I0826 11:08:19.187484  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 34/120
	I0826 11:08:20.189260  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 35/120
	I0826 11:08:21.190890  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 36/120
	I0826 11:08:22.192288  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 37/120
	I0826 11:08:23.193881  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 38/120
	I0826 11:08:24.195528  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 39/120
	I0826 11:08:25.196971  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 40/120
	I0826 11:08:26.198428  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 41/120
	I0826 11:08:27.199972  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 42/120
	I0826 11:08:28.202574  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 43/120
	I0826 11:08:29.204108  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 44/120
	I0826 11:08:30.205864  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 45/120
	I0826 11:08:31.207375  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 46/120
	I0826 11:08:32.209489  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 47/120
	I0826 11:08:33.211101  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 48/120
	I0826 11:08:34.213548  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 49/120
	I0826 11:08:35.215537  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 50/120
	I0826 11:08:36.217401  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 51/120
	I0826 11:08:37.219070  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 52/120
	I0826 11:08:38.221620  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 53/120
	I0826 11:08:39.223153  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 54/120
	I0826 11:08:40.225333  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 55/120
	I0826 11:08:41.226662  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 56/120
	I0826 11:08:42.228036  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 57/120
	I0826 11:08:43.229453  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 58/120
	I0826 11:08:44.230691  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 59/120
	I0826 11:08:45.232813  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 60/120
	I0826 11:08:46.234398  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 61/120
	I0826 11:08:47.235805  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 62/120
	I0826 11:08:48.237269  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 63/120
	I0826 11:08:49.238911  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 64/120
	I0826 11:08:50.240934  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 65/120
	I0826 11:08:51.242622  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 66/120
	I0826 11:08:52.244087  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 67/120
	I0826 11:08:53.246199  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 68/120
	I0826 11:08:54.247757  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 69/120
	I0826 11:08:55.249185  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 70/120
	I0826 11:08:56.250814  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 71/120
	I0826 11:08:57.252165  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 72/120
	I0826 11:08:58.253886  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 73/120
	I0826 11:08:59.255489  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 74/120
	I0826 11:09:00.257398  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 75/120
	I0826 11:09:01.259819  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 76/120
	I0826 11:09:02.261639  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 77/120
	I0826 11:09:03.263103  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 78/120
	I0826 11:09:04.265388  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 79/120
	I0826 11:09:05.267859  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 80/120
	I0826 11:09:06.269785  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 81/120
	I0826 11:09:07.271346  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 82/120
	I0826 11:09:08.273445  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 83/120
	I0826 11:09:09.274933  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 84/120
	I0826 11:09:10.276989  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 85/120
	I0826 11:09:11.278980  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 86/120
	I0826 11:09:12.280885  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 87/120
	I0826 11:09:13.282379  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 88/120
	I0826 11:09:14.283869  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 89/120
	I0826 11:09:15.286138  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 90/120
	I0826 11:09:16.287666  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 91/120
	I0826 11:09:17.289391  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 92/120
	I0826 11:09:18.291610  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 93/120
	I0826 11:09:19.293639  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 94/120
	I0826 11:09:20.295485  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 95/120
	I0826 11:09:21.297247  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 96/120
	I0826 11:09:22.298904  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 97/120
	I0826 11:09:23.300161  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 98/120
	I0826 11:09:24.301709  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 99/120
	I0826 11:09:25.303911  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 100/120
	I0826 11:09:26.305273  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 101/120
	I0826 11:09:27.306871  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 102/120
	I0826 11:09:28.308187  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 103/120
	I0826 11:09:29.309868  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 104/120
	I0826 11:09:30.311642  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 105/120
	I0826 11:09:31.313431  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 106/120
	I0826 11:09:32.315244  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 107/120
	I0826 11:09:33.316779  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 108/120
	I0826 11:09:34.318287  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 109/120
	I0826 11:09:35.320654  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 110/120
	I0826 11:09:36.322054  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 111/120
	I0826 11:09:37.324059  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 112/120
	I0826 11:09:38.325614  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 113/120
	I0826 11:09:39.327189  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 114/120
	I0826 11:09:40.329340  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 115/120
	I0826 11:09:41.330863  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 116/120
	I0826 11:09:42.332470  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 117/120
	I0826 11:09:43.334486  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 118/120
	I0826 11:09:44.336366  121019 main.go:141] libmachine: (ha-055395-m02) Waiting for machine to stop 119/120
	I0826 11:09:45.337553  121019 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 11:09:45.337695  121019 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-055395 node stop m02 -v=7 --alsologtostderr": exit status 30
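The stop itself is what fails here: after backing up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, the driver issues the stop request and then polls the machine state once per second for 120 attempts before giving up with exit status 30. Below is a minimal Go sketch of that poll-until-stopped pattern; the vm interface and stuckVM type are hypothetical stand-ins rather than minikube's libmachine API, and the stuck guest simulates a VM that ignores the shutdown request, which is what the log above looks like from the driver's side.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a driver handle; it is not minikube's real API.
type vm interface {
	Stop() error
	State() (string, error) // e.g. "Running", "Stopped"
}

// stopAndWait mirrors the behaviour in the log: request a stop, then poll the
// machine state once per second for up to maxWait attempts.
func stopAndWait(m vm, maxWait int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		if st, err := m.State(); err == nil && st != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates a guest that never leaves the Running state.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Use a short budget here; the real run above waited the full 120 attempts.
	if err := stopAndWait(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}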
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
E0826 11:10:04.337836  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (19.247906508s)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:09:45.384243  121443 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:09:45.384391  121443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:09:45.384402  121443 out.go:358] Setting ErrFile to fd 2...
	I0826 11:09:45.384407  121443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:09:45.384583  121443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:09:45.384867  121443 out.go:352] Setting JSON to false
	I0826 11:09:45.384899  121443 mustload.go:65] Loading cluster: ha-055395
	I0826 11:09:45.384948  121443 notify.go:220] Checking for updates...
	I0826 11:09:45.385322  121443 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:09:45.385340  121443 status.go:255] checking status of ha-055395 ...
	I0826 11:09:45.385784  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.385855  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.402988  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0826 11:09:45.403588  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.404311  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.404355  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.404748  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.405011  121443 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:09:45.407347  121443 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:09:45.407381  121443 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:09:45.407830  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.407894  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.425367  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0826 11:09:45.425829  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.426402  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.426425  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.426822  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.427056  121443 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:09:45.430360  121443 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:09:45.430797  121443 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:09:45.430822  121443 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:09:45.431029  121443 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:09:45.431327  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.431368  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.446564  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0826 11:09:45.447092  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.447696  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.447740  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.448101  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.448334  121443 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:09:45.448621  121443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:09:45.448668  121443 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:09:45.451918  121443 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:09:45.452325  121443 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:09:45.452350  121443 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:09:45.452484  121443 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:09:45.452651  121443 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:09:45.452874  121443 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:09:45.453024  121443 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:09:45.541840  121443 ssh_runner.go:195] Run: systemctl --version
	I0826 11:09:45.549467  121443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:09:45.572559  121443 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:09:45.572608  121443 api_server.go:166] Checking apiserver status ...
	I0826 11:09:45.572663  121443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:09:45.589836  121443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:09:45.601592  121443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:09:45.601650  121443 ssh_runner.go:195] Run: ls
	I0826 11:09:45.607146  121443 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:09:45.613159  121443 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:09:45.613195  121443 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:09:45.613210  121443 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:09:45.613234  121443 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:09:45.613599  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.613641  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.629357  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0826 11:09:45.629829  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.630360  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.630383  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.630794  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.631155  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:09:45.632896  121443 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:09:45.632914  121443 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:09:45.633247  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.633295  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.649519  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0826 11:09:45.649919  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.650520  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.650550  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.650923  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.651158  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:09:45.653993  121443 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:09:45.654472  121443 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:09:45.654504  121443 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:09:45.654657  121443 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:09:45.654996  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:09:45.655033  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:09:45.670269  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I0826 11:09:45.670679  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:09:45.671143  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:09:45.671164  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:09:45.671514  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:09:45.671703  121443 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:09:45.671864  121443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:09:45.671883  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:09:45.675242  121443 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:09:45.675736  121443 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:09:45.675765  121443 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:09:45.675946  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:09:45.676129  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:09:45.676280  121443 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:09:45.676419  121443 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:04.207045  121443 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:04.207156  121443 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:04.207171  121443 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:04.207178  121443 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:04.207208  121443 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:04.207215  121443 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:04.207529  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.207572  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.223182  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0826 11:10:04.223635  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.224225  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.224266  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.224720  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.224944  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:04.226778  121443 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:04.226798  121443 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:04.227121  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.227177  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.242686  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0826 11:10:04.243141  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.243688  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.243709  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.244011  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.244227  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:04.247075  121443 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:04.247547  121443 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:04.247588  121443 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:04.247717  121443 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:04.248033  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.248072  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.264917  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0826 11:10:04.265476  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.266007  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.266034  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.266369  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.266636  121443 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:04.266932  121443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:04.266972  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:04.269672  121443 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:04.270178  121443 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:04.270212  121443 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:04.270483  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:04.270810  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:04.271014  121443 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:04.271200  121443 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:04.360378  121443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:04.377898  121443 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:04.377931  121443 api_server.go:166] Checking apiserver status ...
	I0826 11:10:04.377979  121443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:04.394517  121443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:04.404881  121443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:04.404937  121443 ssh_runner.go:195] Run: ls
	I0826 11:10:04.411004  121443 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:04.415751  121443 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:04.415779  121443 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:04.415788  121443 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:04.415805  121443 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:04.416143  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.416182  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.432627  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0826 11:10:04.433108  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.433629  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.433675  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.434100  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.434309  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:04.436063  121443 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:04.436084  121443 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:04.436422  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.436461  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.452355  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0826 11:10:04.452830  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.453320  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.453343  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.453676  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.453897  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:04.457051  121443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:04.457438  121443 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:04.457467  121443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:04.457762  121443 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:04.458122  121443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:04.458162  121443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:04.473732  121443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0826 11:10:04.474178  121443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:04.474688  121443 main.go:141] libmachine: Using API Version  1
	I0826 11:10:04.474722  121443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:04.475124  121443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:04.475360  121443 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:04.475618  121443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:04.475642  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:04.478710  121443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:04.479145  121443 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:04.479173  121443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:04.479353  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:04.479572  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:04.479745  121443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:04.479901  121443 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:04.567743  121443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:04.584373  121443 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr" : exit status 3
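Status degrades to exit status 3 because the per-node probe (df -h /var over SSH, a kubelet liveness check, then an apiserver /healthz request against the HA VIP) cannot open an SSH session to the half-stopped m02. The sketch below is a standalone approximation of only the /healthz step, with the VIP copied from the log; it skips TLS verification purely because it does not load the cluster CA that a real client would present.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// HA virtual IP and apiserver port taken from the status output above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.39.254:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}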
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-055395 -n ha-055395
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-055395 logs -n 25: (1.382461712s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m03_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m04 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp testdata/cp-test.txt                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m04_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03:/home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m03 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-055395 node stop m02 -v=7                                                     | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:03:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:03:09.834067  117024 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:03:09.834452  117024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:03:09.834464  117024 out.go:358] Setting ErrFile to fd 2...
	I0826 11:03:09.834471  117024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:03:09.834703  117024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:03:09.835384  117024 out.go:352] Setting JSON to false
	I0826 11:03:09.836326  117024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2731,"bootTime":1724667459,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:03:09.836399  117024 start.go:139] virtualization: kvm guest
	I0826 11:03:09.838707  117024 out.go:177] * [ha-055395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:03:09.840354  117024 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:03:09.840456  117024 notify.go:220] Checking for updates...
	I0826 11:03:09.843077  117024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:03:09.844558  117024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:09.845871  117024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:09.847213  117024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:03:09.848484  117024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:03:09.850036  117024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:03:09.886784  117024 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 11:03:09.888406  117024 start.go:297] selected driver: kvm2
	I0826 11:03:09.888434  117024 start.go:901] validating driver "kvm2" against <nil>
	I0826 11:03:09.888446  117024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:03:09.889211  117024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:03:09.889284  117024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:03:09.905954  117024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:03:09.906005  117024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 11:03:09.906210  117024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:03:09.906246  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:09.906258  117024 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0826 11:03:09.906266  117024 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0826 11:03:09.906313  117024 start.go:340] cluster config:
	{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:03:09.906422  117024 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:03:09.908564  117024 out.go:177] * Starting "ha-055395" primary control-plane node in "ha-055395" cluster
	I0826 11:03:09.909846  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:09.909889  117024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:03:09.909896  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:03:09.909993  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:03:09.910005  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:03:09.910292  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:09.910312  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json: {Name:mk57a761cf1d0c8f62f7f6828100d65bc5ffba3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:09.910450  117024 start.go:360] acquireMachinesLock for ha-055395: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:03:09.910485  117024 start.go:364] duration metric: took 22.171µs to acquireMachinesLock for "ha-055395"
	I0826 11:03:09.910502  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:09.910574  117024 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 11:03:09.912342  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:03:09.912478  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:09.912503  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:09.927829  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I0826 11:03:09.928348  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:09.928999  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:09.929030  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:09.929451  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:09.929667  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:09.929851  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:09.930021  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:03:09.930074  117024 client.go:168] LocalClient.Create starting
	I0826 11:03:09.930124  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:03:09.930164  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:09.930183  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:09.930256  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:03:09.930289  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:09.930306  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:09.930335  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:03:09.930346  117024 main.go:141] libmachine: (ha-055395) Calling .PreCreateCheck
	I0826 11:03:09.930719  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:09.931257  117024 main.go:141] libmachine: Creating machine...
	I0826 11:03:09.931270  117024 main.go:141] libmachine: (ha-055395) Calling .Create
	I0826 11:03:09.931409  117024 main.go:141] libmachine: (ha-055395) Creating KVM machine...
	I0826 11:03:09.933244  117024 main.go:141] libmachine: (ha-055395) DBG | found existing default KVM network
	I0826 11:03:09.934337  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:09.934167  117048 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c00}
	I0826 11:03:09.934398  117024 main.go:141] libmachine: (ha-055395) DBG | created network xml: 
	I0826 11:03:09.934427  117024 main.go:141] libmachine: (ha-055395) DBG | <network>
	I0826 11:03:09.934437  117024 main.go:141] libmachine: (ha-055395) DBG |   <name>mk-ha-055395</name>
	I0826 11:03:09.934443  117024 main.go:141] libmachine: (ha-055395) DBG |   <dns enable='no'/>
	I0826 11:03:09.934451  117024 main.go:141] libmachine: (ha-055395) DBG |   
	I0826 11:03:09.934459  117024 main.go:141] libmachine: (ha-055395) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0826 11:03:09.934482  117024 main.go:141] libmachine: (ha-055395) DBG |     <dhcp>
	I0826 11:03:09.934506  117024 main.go:141] libmachine: (ha-055395) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0826 11:03:09.934583  117024 main.go:141] libmachine: (ha-055395) DBG |     </dhcp>
	I0826 11:03:09.934616  117024 main.go:141] libmachine: (ha-055395) DBG |   </ip>
	I0826 11:03:09.934629  117024 main.go:141] libmachine: (ha-055395) DBG |   
	I0826 11:03:09.934641  117024 main.go:141] libmachine: (ha-055395) DBG | </network>
	I0826 11:03:09.934651  117024 main.go:141] libmachine: (ha-055395) DBG | 
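	A minimal illustrative check, assuming virsh is available on the Jenkins host (these commands are not part of the recorded run): the XML above is the private libvirt network definition the kvm2 driver submits, and the resulting network could be inspected with:
	  # list all libvirt networks, including inactive ones
	  virsh net-list --all
	  # dump the definition of the network created for this profile
	  virsh net-dumpxml mk-ha-055395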
	I0826 11:03:09.939867  117024 main.go:141] libmachine: (ha-055395) DBG | trying to create private KVM network mk-ha-055395 192.168.39.0/24...
	I0826 11:03:10.013535  117024 main.go:141] libmachine: (ha-055395) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 ...
	I0826 11:03:10.013578  117024 main.go:141] libmachine: (ha-055395) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:03:10.013593  117024 main.go:141] libmachine: (ha-055395) DBG | private KVM network mk-ha-055395 192.168.39.0/24 created
	I0826 11:03:10.013610  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.013438  117048 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:10.013629  117024 main.go:141] libmachine: (ha-055395) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:03:10.292908  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.292769  117048 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa...
	I0826 11:03:10.387887  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.387727  117048 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/ha-055395.rawdisk...
	I0826 11:03:10.387917  117024 main.go:141] libmachine: (ha-055395) DBG | Writing magic tar header
	I0826 11:03:10.387930  117024 main.go:141] libmachine: (ha-055395) DBG | Writing SSH key tar header
	I0826 11:03:10.387941  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.387879  117048 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 ...
	I0826 11:03:10.387956  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395
	I0826 11:03:10.387973  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 (perms=drwx------)
	I0826 11:03:10.387995  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:03:10.388005  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:03:10.388019  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:03:10.388033  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:03:10.388119  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:03:10.388156  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:10.388164  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:03:10.388179  117024 main.go:141] libmachine: (ha-055395) Creating domain...
	I0826 11:03:10.388224  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:03:10.388258  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:03:10.388274  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:03:10.388288  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home
	I0826 11:03:10.388313  117024 main.go:141] libmachine: (ha-055395) DBG | Skipping /home - not owner
	I0826 11:03:10.389256  117024 main.go:141] libmachine: (ha-055395) define libvirt domain using xml: 
	I0826 11:03:10.389297  117024 main.go:141] libmachine: (ha-055395) <domain type='kvm'>
	I0826 11:03:10.389308  117024 main.go:141] libmachine: (ha-055395)   <name>ha-055395</name>
	I0826 11:03:10.389320  117024 main.go:141] libmachine: (ha-055395)   <memory unit='MiB'>2200</memory>
	I0826 11:03:10.389330  117024 main.go:141] libmachine: (ha-055395)   <vcpu>2</vcpu>
	I0826 11:03:10.389339  117024 main.go:141] libmachine: (ha-055395)   <features>
	I0826 11:03:10.389353  117024 main.go:141] libmachine: (ha-055395)     <acpi/>
	I0826 11:03:10.389361  117024 main.go:141] libmachine: (ha-055395)     <apic/>
	I0826 11:03:10.389371  117024 main.go:141] libmachine: (ha-055395)     <pae/>
	I0826 11:03:10.389383  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389405  117024 main.go:141] libmachine: (ha-055395)   </features>
	I0826 11:03:10.389421  117024 main.go:141] libmachine: (ha-055395)   <cpu mode='host-passthrough'>
	I0826 11:03:10.389427  117024 main.go:141] libmachine: (ha-055395)   
	I0826 11:03:10.389435  117024 main.go:141] libmachine: (ha-055395)   </cpu>
	I0826 11:03:10.389440  117024 main.go:141] libmachine: (ha-055395)   <os>
	I0826 11:03:10.389447  117024 main.go:141] libmachine: (ha-055395)     <type>hvm</type>
	I0826 11:03:10.389453  117024 main.go:141] libmachine: (ha-055395)     <boot dev='cdrom'/>
	I0826 11:03:10.389461  117024 main.go:141] libmachine: (ha-055395)     <boot dev='hd'/>
	I0826 11:03:10.389466  117024 main.go:141] libmachine: (ha-055395)     <bootmenu enable='no'/>
	I0826 11:03:10.389473  117024 main.go:141] libmachine: (ha-055395)   </os>
	I0826 11:03:10.389478  117024 main.go:141] libmachine: (ha-055395)   <devices>
	I0826 11:03:10.389485  117024 main.go:141] libmachine: (ha-055395)     <disk type='file' device='cdrom'>
	I0826 11:03:10.389496  117024 main.go:141] libmachine: (ha-055395)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/boot2docker.iso'/>
	I0826 11:03:10.389505  117024 main.go:141] libmachine: (ha-055395)       <target dev='hdc' bus='scsi'/>
	I0826 11:03:10.389510  117024 main.go:141] libmachine: (ha-055395)       <readonly/>
	I0826 11:03:10.389517  117024 main.go:141] libmachine: (ha-055395)     </disk>
	I0826 11:03:10.389524  117024 main.go:141] libmachine: (ha-055395)     <disk type='file' device='disk'>
	I0826 11:03:10.389531  117024 main.go:141] libmachine: (ha-055395)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:03:10.389539  117024 main.go:141] libmachine: (ha-055395)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/ha-055395.rawdisk'/>
	I0826 11:03:10.389546  117024 main.go:141] libmachine: (ha-055395)       <target dev='hda' bus='virtio'/>
	I0826 11:03:10.389573  117024 main.go:141] libmachine: (ha-055395)     </disk>
	I0826 11:03:10.389601  117024 main.go:141] libmachine: (ha-055395)     <interface type='network'>
	I0826 11:03:10.389613  117024 main.go:141] libmachine: (ha-055395)       <source network='mk-ha-055395'/>
	I0826 11:03:10.389624  117024 main.go:141] libmachine: (ha-055395)       <model type='virtio'/>
	I0826 11:03:10.389643  117024 main.go:141] libmachine: (ha-055395)     </interface>
	I0826 11:03:10.389654  117024 main.go:141] libmachine: (ha-055395)     <interface type='network'>
	I0826 11:03:10.389662  117024 main.go:141] libmachine: (ha-055395)       <source network='default'/>
	I0826 11:03:10.389674  117024 main.go:141] libmachine: (ha-055395)       <model type='virtio'/>
	I0826 11:03:10.389693  117024 main.go:141] libmachine: (ha-055395)     </interface>
	I0826 11:03:10.389713  117024 main.go:141] libmachine: (ha-055395)     <serial type='pty'>
	I0826 11:03:10.389726  117024 main.go:141] libmachine: (ha-055395)       <target port='0'/>
	I0826 11:03:10.389735  117024 main.go:141] libmachine: (ha-055395)     </serial>
	I0826 11:03:10.389750  117024 main.go:141] libmachine: (ha-055395)     <console type='pty'>
	I0826 11:03:10.389763  117024 main.go:141] libmachine: (ha-055395)       <target type='serial' port='0'/>
	I0826 11:03:10.389774  117024 main.go:141] libmachine: (ha-055395)     </console>
	I0826 11:03:10.389791  117024 main.go:141] libmachine: (ha-055395)     <rng model='virtio'>
	I0826 11:03:10.389804  117024 main.go:141] libmachine: (ha-055395)       <backend model='random'>/dev/random</backend>
	I0826 11:03:10.389821  117024 main.go:141] libmachine: (ha-055395)     </rng>
	I0826 11:03:10.389834  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389842  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389861  117024 main.go:141] libmachine: (ha-055395)   </devices>
	I0826 11:03:10.389877  117024 main.go:141] libmachine: (ha-055395) </domain>
	I0826 11:03:10.389893  117024 main.go:141] libmachine: (ha-055395) 
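	A minimal illustrative check, assuming virsh is available on the host (not part of the recorded run): the domain XML above is what gets defined before the driver starts polling for an IP, so both the stored definition and the DHCP lease it eventually obtains could be inspected with:
	  # show the defined domain exactly as libvirt stored it
	  virsh dumpxml ha-055395
	  # list interface addresses once the DHCP lease (192.168.39.150 below) is handed out
	  virsh domifaddr ha-055395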
	I0826 11:03:10.394426  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:d8:50:59 in network default
	I0826 11:03:10.395164  117024 main.go:141] libmachine: (ha-055395) Ensuring networks are active...
	I0826 11:03:10.395182  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:10.396007  117024 main.go:141] libmachine: (ha-055395) Ensuring network default is active
	I0826 11:03:10.396336  117024 main.go:141] libmachine: (ha-055395) Ensuring network mk-ha-055395 is active
	I0826 11:03:10.397011  117024 main.go:141] libmachine: (ha-055395) Getting domain xml...
	I0826 11:03:10.397964  117024 main.go:141] libmachine: (ha-055395) Creating domain...
	I0826 11:03:11.608496  117024 main.go:141] libmachine: (ha-055395) Waiting to get IP...
	I0826 11:03:11.609319  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:11.609774  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:11.609804  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:11.609742  117048 retry.go:31] will retry after 224.423543ms: waiting for machine to come up
	I0826 11:03:11.836297  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:11.836820  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:11.836848  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:11.836779  117048 retry.go:31] will retry after 265.180359ms: waiting for machine to come up
	I0826 11:03:12.103409  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.103948  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.104023  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.103928  117048 retry.go:31] will retry after 370.79504ms: waiting for machine to come up
	I0826 11:03:12.476765  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.477246  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.477275  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.477191  117048 retry.go:31] will retry after 384.306618ms: waiting for machine to come up
	I0826 11:03:12.862866  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.863312  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.863344  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.863261  117048 retry.go:31] will retry after 740.562218ms: waiting for machine to come up
	I0826 11:03:13.605198  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:13.605687  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:13.605716  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:13.605650  117048 retry.go:31] will retry after 788.816503ms: waiting for machine to come up
	I0826 11:03:14.395780  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:14.396420  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:14.396446  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:14.396366  117048 retry.go:31] will retry after 741.467845ms: waiting for machine to come up
	I0826 11:03:15.139957  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:15.140381  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:15.140402  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:15.140337  117048 retry.go:31] will retry after 1.206059591s: waiting for machine to come up
	I0826 11:03:16.347725  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:16.348134  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:16.348164  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:16.348092  117048 retry.go:31] will retry after 1.231399953s: waiting for machine to come up
	I0826 11:03:17.581476  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:17.582043  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:17.582063  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:17.581997  117048 retry.go:31] will retry after 1.657218554s: waiting for machine to come up
	I0826 11:03:19.240853  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:19.241329  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:19.241363  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:19.241273  117048 retry.go:31] will retry after 1.846849017s: waiting for machine to come up
	I0826 11:03:21.089350  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:21.089818  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:21.089849  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:21.089754  117048 retry.go:31] will retry after 2.497649926s: waiting for machine to come up
	I0826 11:03:23.590666  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:23.591127  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:23.591163  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:23.591086  117048 retry.go:31] will retry after 4.092248941s: waiting for machine to come up
	I0826 11:03:27.686813  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:27.687335  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:27.687358  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:27.687276  117048 retry.go:31] will retry after 5.278012607s: waiting for machine to come up
	I0826 11:03:32.968801  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:32.969342  117024 main.go:141] libmachine: (ha-055395) Found IP for machine: 192.168.39.150
	I0826 11:03:32.969360  117024 main.go:141] libmachine: (ha-055395) Reserving static IP address...
	I0826 11:03:32.969372  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has current primary IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:32.969826  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find host DHCP lease matching {name: "ha-055395", mac: "52:54:00:91:82:8b", ip: "192.168.39.150"} in network mk-ha-055395
	I0826 11:03:33.052147  117024 main.go:141] libmachine: (ha-055395) DBG | Getting to WaitForSSH function...
	I0826 11:03:33.052237  117024 main.go:141] libmachine: (ha-055395) Reserved static IP address: 192.168.39.150
	I0826 11:03:33.052289  117024 main.go:141] libmachine: (ha-055395) Waiting for SSH to be available...
	I0826 11:03:33.056078  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.056568  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.056592  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.056711  117024 main.go:141] libmachine: (ha-055395) DBG | Using SSH client type: external
	I0826 11:03:33.056737  117024 main.go:141] libmachine: (ha-055395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa (-rw-------)
	I0826 11:03:33.056766  117024 main.go:141] libmachine: (ha-055395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:03:33.056775  117024 main.go:141] libmachine: (ha-055395) DBG | About to run SSH command:
	I0826 11:03:33.056786  117024 main.go:141] libmachine: (ha-055395) DBG | exit 0
	I0826 11:03:33.178938  117024 main.go:141] libmachine: (ha-055395) DBG | SSH cmd err, output: <nil>: 
	I0826 11:03:33.179239  117024 main.go:141] libmachine: (ha-055395) KVM machine creation complete!
	I0826 11:03:33.179607  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:33.180172  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:33.180402  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:33.180592  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:03:33.180608  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:33.181945  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:03:33.181965  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:03:33.181974  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:03:33.181982  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.184830  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.185291  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.185326  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.185481  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.185692  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.185863  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.185989  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.186127  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.186361  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.186376  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:03:33.286368  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:03:33.286397  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:03:33.286407  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.289364  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.289724  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.289754  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.289904  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.290096  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.290272  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.290395  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.290577  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.290750  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.290761  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:03:33.391613  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:03:33.391685  117024 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:03:33.391692  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:03:33.391705  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.392038  117024 buildroot.go:166] provisioning hostname "ha-055395"
	I0826 11:03:33.392073  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.392344  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.395408  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.395727  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.395751  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.395938  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.396205  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.396421  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.396636  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.396831  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.397014  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.397025  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395 && echo "ha-055395" | sudo tee /etc/hostname
	I0826 11:03:33.513672  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:03:33.513704  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.516623  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.516993  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.517032  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.517254  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.517472  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.517643  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.517818  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.518028  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.518217  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.518239  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:03:33.627944  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:03:33.627979  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:03:33.628039  117024 buildroot.go:174] setting up certificates
	I0826 11:03:33.628057  117024 provision.go:84] configureAuth start
	I0826 11:03:33.628073  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.628380  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:33.631377  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.631748  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.631772  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.631927  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.634204  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.634603  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.634631  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.634783  117024 provision.go:143] copyHostCerts
	I0826 11:03:33.634817  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:03:33.634872  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:03:33.634898  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:03:33.634985  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:03:33.635112  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:03:33.635142  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:03:33.635152  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:03:33.635193  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:03:33.635254  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:03:33.635277  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:03:33.635286  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:03:33.635320  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:03:33.635390  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395 san=[127.0.0.1 192.168.39.150 ha-055395 localhost minikube]
	I0826 11:03:33.739702  117024 provision.go:177] copyRemoteCerts
	I0826 11:03:33.739767  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:03:33.739792  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.742758  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.743086  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.743130  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.743325  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.743520  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.743664  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.743807  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:33.824832  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:03:33.824939  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:03:33.849097  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:03:33.849187  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0826 11:03:33.871798  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:03:33.871885  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:03:33.894893  117024 provision.go:87] duration metric: took 266.81811ms to configureAuth
	I0826 11:03:33.894926  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:03:33.895099  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:33.895174  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.898313  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.898706  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.898737  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.898965  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.899176  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.899351  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.899494  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.899668  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.899887  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.899903  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:03:34.153675  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:03:34.153708  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:03:34.153716  117024 main.go:141] libmachine: (ha-055395) Calling .GetURL
	I0826 11:03:34.155133  117024 main.go:141] libmachine: (ha-055395) DBG | Using libvirt version 6000000
	I0826 11:03:34.157382  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.157739  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.157761  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.157981  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:03:34.157999  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:03:34.158007  117024 client.go:171] duration metric: took 24.227921772s to LocalClient.Create
	I0826 11:03:34.158033  117024 start.go:167] duration metric: took 24.228015034s to libmachine.API.Create "ha-055395"
	I0826 11:03:34.158045  117024 start.go:293] postStartSetup for "ha-055395" (driver="kvm2")
	I0826 11:03:34.158060  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:03:34.158083  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.158362  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:03:34.158390  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.160846  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.161147  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.161172  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.161356  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.161539  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.161694  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.161890  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.240762  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:03:34.244793  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:03:34.244821  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:03:34.244888  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:03:34.244962  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:03:34.244972  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:03:34.245068  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:03:34.254397  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:03:34.282025  117024 start.go:296] duration metric: took 123.960061ms for postStartSetup
	I0826 11:03:34.282091  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:34.282754  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:34.286054  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.286485  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.286509  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.286858  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:34.287156  117024 start.go:128] duration metric: took 24.376564256s to createHost
	I0826 11:03:34.287188  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.289487  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.289901  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.289925  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.290240  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.290470  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.290605  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.290857  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.291072  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:34.291256  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:34.291273  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:03:34.399785  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670214.379153941
	
	I0826 11:03:34.399814  117024 fix.go:216] guest clock: 1724670214.379153941
	I0826 11:03:34.399826  117024 fix.go:229] Guest: 2024-08-26 11:03:34.379153941 +0000 UTC Remote: 2024-08-26 11:03:34.287172419 +0000 UTC m=+24.490698333 (delta=91.981522ms)
	I0826 11:03:34.399860  117024 fix.go:200] guest clock delta is within tolerance: 91.981522ms
	I0826 11:03:34.399866  117024 start.go:83] releasing machines lock for "ha-055395", held for 24.489372546s
	I0826 11:03:34.399890  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.400237  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:34.403050  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.403499  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.403521  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.403654  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404229  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404430  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404511  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:03:34.404557  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.404690  117024 ssh_runner.go:195] Run: cat /version.json
	I0826 11:03:34.404716  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.407489  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407653  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407867  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.407903  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407936  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.407952  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.408069  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.408299  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.408332  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.408558  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.408559  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.408794  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.408775  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.408963  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.527543  117024 ssh_runner.go:195] Run: systemctl --version
	I0826 11:03:34.533890  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:03:34.692657  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:03:34.698640  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:03:34.698717  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:03:34.715052  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:03:34.715086  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:03:34.715157  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:03:34.730592  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:03:34.744714  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:03:34.744793  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:03:34.758226  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:03:34.771923  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:03:34.887947  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:03:35.035349  117024 docker.go:233] disabling docker service ...
	I0826 11:03:35.035417  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:03:35.049879  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:03:35.062408  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:03:35.193889  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:03:35.329732  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:03:35.342913  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:03:35.360253  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:03:35.360322  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.370813  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:03:35.370900  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.381074  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.392635  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.403367  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:03:35.414733  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.426584  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.443776  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
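Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set as follows; the expected values are reconstructed from the commands, not captured from the VM:
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])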
	I0826 11:03:35.453992  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:03:35.463419  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:03:35.463497  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:03:35.477269  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:03:35.487183  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:03:35.609378  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:03:35.740451  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:03:35.740543  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:03:35.745507  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:03:35.745610  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:03:35.749251  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:03:35.787232  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
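crictl reaches cri-o through the endpoint written to /etc/crictl.yaml a few steps earlier; both the file and the live connection can be checked directly on the node, for example:
	sudo cat /etc/crictl.yaml      # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl info | head        # should answer from cri-o once the service is up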
	I0826 11:03:35.787327  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:03:35.815315  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:03:35.844399  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:03:35.846146  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:35.848989  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:35.849355  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:35.849383  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:35.849674  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:03:35.853588  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:03:35.865877  117024 kubeadm.go:883] updating cluster {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:03:35.865989  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:35.866043  117024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:03:35.897173  117024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 11:03:35.897253  117024 ssh_runner.go:195] Run: which lz4
	I0826 11:03:35.901041  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0826 11:03:35.901171  117024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 11:03:35.905185  117024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 11:03:35.905229  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 11:03:37.198330  117024 crio.go:462] duration metric: took 1.297194802s to copy over tarball
	I0826 11:03:37.198412  117024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 11:03:39.276677  117024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078236047s)
	I0826 11:03:39.276711  117024 crio.go:469] duration metric: took 2.078346989s to extract the tarball
	I0826 11:03:39.276722  117024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 11:03:39.313763  117024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:03:39.359702  117024 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:03:39.359732  117024 cache_images.go:84] Images are preloaded, skipping loading
	I0826 11:03:39.359745  117024 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.0 crio true true} ...
	I0826 11:03:39.359904  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
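The [Unit]/[Service] fragment above is the kubelet drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; on the node it can be inspected with:
	sudo systemctl cat kubelet                                        # unit file plus all drop-ins
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # just the minikube-written drop-in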
	I0826 11:03:39.359999  117024 ssh_runner.go:195] Run: crio config
	I0826 11:03:39.409301  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:39.409333  117024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 11:03:39.409347  117024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:03:39.409380  117024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-055395 NodeName:ha-055395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:03:39.409557  117024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-055395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
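The generated kubeadm config ends up at /var/tmp/minikube/kubeadm.yaml before init (see the scp and cp steps below). If you want to sanity-check it on the node without touching the cluster, a sketch is below; note that minikube's real invocation, shown later, adds a long --ignore-preflight-errors list, and a dry run may still trip the same preflight checks:
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run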
	
	I0826 11:03:39.409585  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:03:39.409641  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:03:39.427739  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:03:39.427853  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
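The static pod above runs kube-vip with leader election (lease plndr-cp-lock) and binds the API VIP 192.168.39.254 on eth0 of whichever control-plane node currently holds the lease. Once the control plane is up, both can be checked; a sketch, run wherever a kubeconfig for the cluster is available:
	ip addr show eth0 | grep 192.168.39.254          # VIP should be present on the current leader
	kubectl -n kube-system get lease plndr-cp-lock   # shows which node holds the lock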
	I0826 11:03:39.427919  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:03:39.437860  117024 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:03:39.437948  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0826 11:03:39.447555  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0826 11:03:39.463924  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:03:39.480746  117024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0826 11:03:39.497403  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0826 11:03:39.514189  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:03:39.517948  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:03:39.529999  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:03:39.648543  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:03:39.665059  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.150
	I0826 11:03:39.665089  117024 certs.go:194] generating shared ca certs ...
	I0826 11:03:39.665108  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.665299  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:03:39.665356  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:03:39.665369  117024 certs.go:256] generating profile certs ...
	I0826 11:03:39.665445  117024 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:03:39.665478  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt with IP's: []
	I0826 11:03:39.853443  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt ...
	I0826 11:03:39.853479  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt: {Name:mkc397b1a38dbc1647b20007cc4550ac4c76cb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.853664  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key ...
	I0826 11:03:39.853675  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key: {Name:mkef63b3342f1a90a16a5cf40496e63ab5aa7002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.853752  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa
	I0826 11:03:39.853766  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.254]
	I0826 11:03:39.961173  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa ...
	I0826 11:03:39.961217  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa: {Name:mk6de53fc57d5a4578e426a8fda2cbc0e119c40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.961393  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa ...
	I0826 11:03:39.961408  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa: {Name:mkf6d833d9635569571577746e5e1109a1cf347f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.961476  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:03:39.961607  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:03:39.961667  117024 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:03:39.961684  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt with IP's: []
	I0826 11:03:40.078200  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt ...
	I0826 11:03:40.078240  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt: {Name:mk33ca7bddb8f75ee337ba852e63f18daa5f2c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:40.078430  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key ...
	I0826 11:03:40.078443  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key: {Name:mk1bf6df6decfe2222d191672ac8677c0385a9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:40.078521  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:03:40.078547  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:03:40.078564  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:03:40.078578  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:03:40.078592  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:03:40.078607  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:03:40.078620  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:03:40.078632  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:03:40.078692  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:03:40.078731  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:03:40.078745  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:03:40.078769  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:03:40.078796  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:03:40.078823  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:03:40.078885  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:03:40.078914  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.078934  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.078949  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.079588  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:03:40.104793  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:03:40.127468  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:03:40.150072  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:03:40.172631  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 11:03:40.195723  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:03:40.218225  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:03:40.240516  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:03:40.262956  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:03:40.285934  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:03:40.308626  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:03:40.331138  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:03:40.347251  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:03:40.352639  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:03:40.362798  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.366922  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.366975  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.372443  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:03:40.383218  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:03:40.393973  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.398597  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.398679  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.404284  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:03:40.415301  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:03:40.429673  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.434681  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.434762  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.441456  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
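The 51391683.0, 3ec20f2e.0 and b5213941.0 names above are the OpenSSL subject-hash of each CA certificate, which is how the system trust store locates them. The mapping can be reproduced on the node, for example for the minikube CA:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to the minikubeCA.pem copy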
	I0826 11:03:40.454176  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:03:40.461169  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:03:40.461255  117024 kubeadm.go:392] StartCluster: {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:03:40.461356  117024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:03:40.461426  117024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:03:40.516064  117024 cri.go:89] found id: ""
	I0826 11:03:40.516165  117024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 11:03:40.526222  117024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 11:03:40.535869  117024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:03:40.545763  117024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:03:40.545788  117024 kubeadm.go:157] found existing configuration files:
	
	I0826 11:03:40.545844  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:03:40.555213  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:03:40.555301  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:03:40.565112  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:03:40.574562  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:03:40.574662  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:03:40.584245  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:03:40.593223  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:03:40.593296  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:03:40.602877  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:03:40.612049  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:03:40.612126  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:03:40.621273  117024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 11:03:40.725441  117024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 11:03:40.725600  117024 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 11:03:40.816670  117024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 11:03:40.816813  117024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 11:03:40.816995  117024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 11:03:40.826481  117024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 11:03:40.846502  117024 out.go:235]   - Generating certificates and keys ...
	I0826 11:03:40.846634  117024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 11:03:40.846702  117024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 11:03:41.055404  117024 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 11:03:41.169930  117024 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 11:03:41.344289  117024 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 11:03:41.612958  117024 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 11:03:41.777675  117024 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 11:03:41.777838  117024 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-055395 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0826 11:03:42.045956  117024 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 11:03:42.046165  117024 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-055395 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0826 11:03:42.219563  117024 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 11:03:42.366975  117024 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 11:03:42.434860  117024 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 11:03:42.434957  117024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 11:03:42.700092  117024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 11:03:42.881338  117024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 11:03:43.096762  117024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 11:03:43.319011  117024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 11:03:43.375586  117024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 11:03:43.376129  117024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 11:03:43.380586  117024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 11:03:43.458697  117024 out.go:235]   - Booting up control plane ...
	I0826 11:03:43.458888  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 11:03:43.459052  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 11:03:43.459158  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 11:03:43.459309  117024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 11:03:43.459455  117024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 11:03:43.459521  117024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 11:03:43.551735  117024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 11:03:43.551858  117024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 11:03:44.552521  117024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001356152s
	I0826 11:03:44.552618  117024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 11:03:50.224303  117024 kubeadm.go:310] [api-check] The API server is healthy after 5.67445267s
	I0826 11:03:50.237911  117024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 11:03:50.263772  117024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 11:03:50.807085  117024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 11:03:50.807295  117024 kubeadm.go:310] [mark-control-plane] Marking the node ha-055395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 11:03:50.820746  117024 kubeadm.go:310] [bootstrap-token] Using token: pkf7iv.zgxj01v83wryjd35
	I0826 11:03:50.822481  117024 out.go:235]   - Configuring RBAC rules ...
	I0826 11:03:50.822621  117024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 11:03:50.832725  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 11:03:50.841787  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 11:03:50.846377  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 11:03:50.850960  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 11:03:50.855809  117024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 11:03:50.872409  117024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 11:03:51.150143  117024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 11:03:51.632447  117024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 11:03:51.632473  117024 kubeadm.go:310] 
	I0826 11:03:51.632527  117024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 11:03:51.632531  117024 kubeadm.go:310] 
	I0826 11:03:51.632656  117024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 11:03:51.632668  117024 kubeadm.go:310] 
	I0826 11:03:51.632695  117024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 11:03:51.632809  117024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 11:03:51.632894  117024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 11:03:51.632902  117024 kubeadm.go:310] 
	I0826 11:03:51.632943  117024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 11:03:51.632949  117024 kubeadm.go:310] 
	I0826 11:03:51.633001  117024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 11:03:51.633024  117024 kubeadm.go:310] 
	I0826 11:03:51.633067  117024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 11:03:51.633154  117024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 11:03:51.633256  117024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 11:03:51.633266  117024 kubeadm.go:310] 
	I0826 11:03:51.633373  117024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 11:03:51.633484  117024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 11:03:51.633495  117024 kubeadm.go:310] 
	I0826 11:03:51.633602  117024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pkf7iv.zgxj01v83wryjd35 \
	I0826 11:03:51.633728  117024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 11:03:51.633768  117024 kubeadm.go:310] 	--control-plane 
	I0826 11:03:51.633775  117024 kubeadm.go:310] 
	I0826 11:03:51.633844  117024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 11:03:51.633850  117024 kubeadm.go:310] 
	I0826 11:03:51.633917  117024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pkf7iv.zgxj01v83wryjd35 \
	I0826 11:03:51.634004  117024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 11:03:51.634819  117024 kubeadm.go:310] W0826 11:03:40.707678     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 11:03:51.635147  117024 kubeadm.go:310] W0826 11:03:40.708593     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 11:03:51.635289  117024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
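Both warnings above are actionable by hand if desired: the v1beta3 notices point at the apiVersion used in the generated kubeadm config, and the Service-Kubelet notice only means the kubelet unit was started (see the `systemctl start kubelet` earlier) but never enabled. A sketch, using the commands the warnings themselves suggest:
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml   # per the v1beta3 warnings
	sudo systemctl enable kubelet.service                                                # per the Service-Kubelet warning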
	I0826 11:03:51.635334  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:51.635349  117024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 11:03:51.637400  117024 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0826 11:03:51.639006  117024 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0826 11:03:51.645091  117024 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0826 11:03:51.645116  117024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0826 11:03:51.666922  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
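After the kindnet manifest is applied, the rollout can be followed with the same bundled kubectl and kubeconfig shown above; a sketch:
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonsets            # the kindnet DaemonSet should appear here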
	I0826 11:03:52.066335  117024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 11:03:52.066465  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395 minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=true
	I0826 11:03:52.066488  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:52.189227  117024 ops.go:34] apiserver oom_adj: -16
	I0826 11:03:52.238042  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:52.738872  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:53.238660  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:53.738325  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:54.238982  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:54.738215  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.239022  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.738912  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.832808  117024 kubeadm.go:1113] duration metric: took 3.766437401s to wait for elevateKubeSystemPrivileges
	I0826 11:03:55.832874  117024 kubeadm.go:394] duration metric: took 15.371615091s to StartCluster
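The repeated `kubectl get sa default` calls above are a poll for the default service account to exist in the default namespace; roughly equivalent to the following loop, with the interval chosen only for illustration:
	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5    # poll until the service account shows up
	done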
	I0826 11:03:55.832909  117024 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:55.832991  117024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:55.833735  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:55.833973  117024 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:55.833987  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0826 11:03:55.834002  117024 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 11:03:55.834041  117024 addons.go:69] Setting storage-provisioner=true in profile "ha-055395"
	I0826 11:03:55.834065  117024 addons.go:234] Setting addon storage-provisioner=true in "ha-055395"
	I0826 11:03:55.834088  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:03:55.833995  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:03:55.834109  117024 addons.go:69] Setting default-storageclass=true in profile "ha-055395"
	I0826 11:03:55.834146  117024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-055395"
	I0826 11:03:55.834148  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:55.834417  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.834465  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.834526  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.834557  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.850645  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0826 11:03:55.850816  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0826 11:03:55.851241  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.851370  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.851833  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.851857  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.851904  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.851929  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.852256  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.852317  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.852426  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.852909  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.852938  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.854651  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:55.855019  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 11:03:55.855525  117024 cert_rotation.go:140] Starting client certificate rotation controller
	I0826 11:03:55.855926  117024 addons.go:234] Setting addon default-storageclass=true in "ha-055395"
	I0826 11:03:55.855977  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:03:55.856371  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.856407  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.869749  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0826 11:03:55.870232  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.870693  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.870713  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.871106  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.871333  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.871710  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
	I0826 11:03:55.872155  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.872668  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.872690  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.873028  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:55.873046  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.873632  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.873689  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.875223  117024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:03:55.876601  117024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:03:55.876623  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 11:03:55.876643  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:55.880087  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.880531  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:55.880555  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.880781  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:55.880990  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:55.881154  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:55.881308  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:55.893589  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
	I0826 11:03:55.894113  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.894624  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.894651  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.895062  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.895257  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.896973  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:55.897210  117024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 11:03:55.897224  117024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 11:03:55.897240  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:55.900744  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.901224  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:55.901251  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.901403  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:55.901602  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:55.901764  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:55.901982  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:56.002634  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0826 11:03:56.043456  117024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:03:56.066165  117024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 11:03:56.601633  117024 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
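The coredns ConfigMap edit above (the sed pipeline at 11:03:56.002634) splices a hosts block and a log directive into the Corefile so pods can resolve host.minikube.internal to the host machine. Reconstructed from those sed expressions (not captured from the cluster), the relevant portion of the patched Corefile looks roughly like:

	.:53 {
	    errors
	    log
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}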
	I0826 11:03:56.858426  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858454  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858534  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858558  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858910  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.858922  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.858924  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.858925  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.858944  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.858954  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858962  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858933  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.859022  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.859031  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.859161  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.859222  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.859223  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.859302  117024 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 11:03:56.859329  117024 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 11:03:56.859433  117024 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0826 11:03:56.859444  117024 round_trippers.go:469] Request Headers:
	I0826 11:03:56.859454  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:03:56.859463  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:03:56.859479  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.859435  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.859541  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.875875  117024 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0826 11:03:56.876521  117024 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0826 11:03:56.876537  117024 round_trippers.go:469] Request Headers:
	I0826 11:03:56.876544  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:03:56.876549  117024 round_trippers.go:473]     Content-Type: application/json
	I0826 11:03:56.876553  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:03:56.881215  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:03:56.881431  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.881450  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.881766  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.881785  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.881785  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.883642  117024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0826 11:03:56.884883  117024 addons.go:510] duration metric: took 1.050875595s for enable addons: enabled=[storage-provisioner default-storageclass]
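The storage.k8s.io GET/PUT exchange just above is the default-storageclass addon marking minikube's bundled "standard" StorageClass as the cluster default. A roughly equivalent manual step with kubectl would be (illustrative only; minikube performs this through client-go rather than shelling out to kubectl):

	kubectl --context ha-055395 get storageclass standard -o yaml
	kubectl --context ha-055395 patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'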
	I0826 11:03:56.884934  117024 start.go:246] waiting for cluster config update ...
	I0826 11:03:56.884951  117024 start.go:255] writing updated cluster config ...
	I0826 11:03:56.886530  117024 out.go:201] 
	I0826 11:03:56.887959  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:56.888029  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:56.889488  117024 out.go:177] * Starting "ha-055395-m02" control-plane node in "ha-055395" cluster
	I0826 11:03:56.890519  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:56.890546  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:03:56.890653  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:03:56.890667  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:03:56.890733  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:56.890995  117024 start.go:360] acquireMachinesLock for ha-055395-m02: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:03:56.891059  117024 start.go:364] duration metric: took 39.036µs to acquireMachinesLock for "ha-055395-m02"
	I0826 11:03:56.891085  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:56.891180  117024 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0826 11:03:56.892849  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:03:56.892928  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:56.892956  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:56.908421  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0826 11:03:56.908931  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:56.909451  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:56.909474  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:56.909912  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:56.910102  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:03:56.910242  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:03:56.910395  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:03:56.910417  117024 client.go:168] LocalClient.Create starting
	I0826 11:03:56.910446  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:03:56.910483  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:56.910498  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:56.910556  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:03:56.910576  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:56.910588  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:56.910604  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:03:56.910612  117024 main.go:141] libmachine: (ha-055395-m02) Calling .PreCreateCheck
	I0826 11:03:56.910729  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:03:56.911153  117024 main.go:141] libmachine: Creating machine...
	I0826 11:03:56.911169  117024 main.go:141] libmachine: (ha-055395-m02) Calling .Create
	I0826 11:03:56.911293  117024 main.go:141] libmachine: (ha-055395-m02) Creating KVM machine...
	I0826 11:03:56.912625  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found existing default KVM network
	I0826 11:03:56.912797  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found existing private KVM network mk-ha-055395
	I0826 11:03:56.912931  117024 main.go:141] libmachine: (ha-055395-m02) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 ...
	I0826 11:03:56.912950  117024 main.go:141] libmachine: (ha-055395-m02) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:03:56.913032  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:56.912933  117411 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:56.913133  117024 main.go:141] libmachine: (ha-055395-m02) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:03:57.178677  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.178502  117411 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa...
	I0826 11:03:57.355999  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.355865  117411 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/ha-055395-m02.rawdisk...
	I0826 11:03:57.356029  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Writing magic tar header
	I0826 11:03:57.356040  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Writing SSH key tar header
	I0826 11:03:57.356157  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.356040  117411 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 ...
	I0826 11:03:57.356241  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 (perms=drwx------)
	I0826 11:03:57.356257  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02
	I0826 11:03:57.356264  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:03:57.356271  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:03:57.356283  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:57.356295  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:03:57.356308  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:03:57.356319  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:03:57.356334  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home
	I0826 11:03:57.356349  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:03:57.356357  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Skipping /home - not owner
	I0826 11:03:57.356369  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:03:57.356377  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:03:57.356384  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:03:57.356391  117024 main.go:141] libmachine: (ha-055395-m02) Creating domain...
	I0826 11:03:57.357519  117024 main.go:141] libmachine: (ha-055395-m02) define libvirt domain using xml: 
	I0826 11:03:57.357543  117024 main.go:141] libmachine: (ha-055395-m02) <domain type='kvm'>
	I0826 11:03:57.357577  117024 main.go:141] libmachine: (ha-055395-m02)   <name>ha-055395-m02</name>
	I0826 11:03:57.357594  117024 main.go:141] libmachine: (ha-055395-m02)   <memory unit='MiB'>2200</memory>
	I0826 11:03:57.357603  117024 main.go:141] libmachine: (ha-055395-m02)   <vcpu>2</vcpu>
	I0826 11:03:57.357613  117024 main.go:141] libmachine: (ha-055395-m02)   <features>
	I0826 11:03:57.357621  117024 main.go:141] libmachine: (ha-055395-m02)     <acpi/>
	I0826 11:03:57.357636  117024 main.go:141] libmachine: (ha-055395-m02)     <apic/>
	I0826 11:03:57.357655  117024 main.go:141] libmachine: (ha-055395-m02)     <pae/>
	I0826 11:03:57.357664  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.357691  117024 main.go:141] libmachine: (ha-055395-m02)   </features>
	I0826 11:03:57.357711  117024 main.go:141] libmachine: (ha-055395-m02)   <cpu mode='host-passthrough'>
	I0826 11:03:57.357723  117024 main.go:141] libmachine: (ha-055395-m02)   
	I0826 11:03:57.357738  117024 main.go:141] libmachine: (ha-055395-m02)   </cpu>
	I0826 11:03:57.357747  117024 main.go:141] libmachine: (ha-055395-m02)   <os>
	I0826 11:03:57.357759  117024 main.go:141] libmachine: (ha-055395-m02)     <type>hvm</type>
	I0826 11:03:57.357772  117024 main.go:141] libmachine: (ha-055395-m02)     <boot dev='cdrom'/>
	I0826 11:03:57.357787  117024 main.go:141] libmachine: (ha-055395-m02)     <boot dev='hd'/>
	I0826 11:03:57.357799  117024 main.go:141] libmachine: (ha-055395-m02)     <bootmenu enable='no'/>
	I0826 11:03:57.357808  117024 main.go:141] libmachine: (ha-055395-m02)   </os>
	I0826 11:03:57.357816  117024 main.go:141] libmachine: (ha-055395-m02)   <devices>
	I0826 11:03:57.357827  117024 main.go:141] libmachine: (ha-055395-m02)     <disk type='file' device='cdrom'>
	I0826 11:03:57.357841  117024 main.go:141] libmachine: (ha-055395-m02)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/boot2docker.iso'/>
	I0826 11:03:57.357851  117024 main.go:141] libmachine: (ha-055395-m02)       <target dev='hdc' bus='scsi'/>
	I0826 11:03:57.357869  117024 main.go:141] libmachine: (ha-055395-m02)       <readonly/>
	I0826 11:03:57.357878  117024 main.go:141] libmachine: (ha-055395-m02)     </disk>
	I0826 11:03:57.357981  117024 main.go:141] libmachine: (ha-055395-m02)     <disk type='file' device='disk'>
	I0826 11:03:57.358026  117024 main.go:141] libmachine: (ha-055395-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:03:57.358046  117024 main.go:141] libmachine: (ha-055395-m02)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/ha-055395-m02.rawdisk'/>
	I0826 11:03:57.358060  117024 main.go:141] libmachine: (ha-055395-m02)       <target dev='hda' bus='virtio'/>
	I0826 11:03:57.358072  117024 main.go:141] libmachine: (ha-055395-m02)     </disk>
	I0826 11:03:57.358085  117024 main.go:141] libmachine: (ha-055395-m02)     <interface type='network'>
	I0826 11:03:57.358128  117024 main.go:141] libmachine: (ha-055395-m02)       <source network='mk-ha-055395'/>
	I0826 11:03:57.358155  117024 main.go:141] libmachine: (ha-055395-m02)       <model type='virtio'/>
	I0826 11:03:57.358165  117024 main.go:141] libmachine: (ha-055395-m02)     </interface>
	I0826 11:03:57.358173  117024 main.go:141] libmachine: (ha-055395-m02)     <interface type='network'>
	I0826 11:03:57.358180  117024 main.go:141] libmachine: (ha-055395-m02)       <source network='default'/>
	I0826 11:03:57.358185  117024 main.go:141] libmachine: (ha-055395-m02)       <model type='virtio'/>
	I0826 11:03:57.358195  117024 main.go:141] libmachine: (ha-055395-m02)     </interface>
	I0826 11:03:57.358208  117024 main.go:141] libmachine: (ha-055395-m02)     <serial type='pty'>
	I0826 11:03:57.358218  117024 main.go:141] libmachine: (ha-055395-m02)       <target port='0'/>
	I0826 11:03:57.358224  117024 main.go:141] libmachine: (ha-055395-m02)     </serial>
	I0826 11:03:57.358234  117024 main.go:141] libmachine: (ha-055395-m02)     <console type='pty'>
	I0826 11:03:57.358245  117024 main.go:141] libmachine: (ha-055395-m02)       <target type='serial' port='0'/>
	I0826 11:03:57.358260  117024 main.go:141] libmachine: (ha-055395-m02)     </console>
	I0826 11:03:57.358272  117024 main.go:141] libmachine: (ha-055395-m02)     <rng model='virtio'>
	I0826 11:03:57.358310  117024 main.go:141] libmachine: (ha-055395-m02)       <backend model='random'>/dev/random</backend>
	I0826 11:03:57.358334  117024 main.go:141] libmachine: (ha-055395-m02)     </rng>
	I0826 11:03:57.358346  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.358361  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.358372  117024 main.go:141] libmachine: (ha-055395-m02)   </devices>
	I0826 11:03:57.358381  117024 main.go:141] libmachine: (ha-055395-m02) </domain>
	I0826 11:03:57.358395  117024 main.go:141] libmachine: (ha-055395-m02) 
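For reference, the domain definition logged above could be applied by hand with libvirt's standard CLI, e.g. (illustrative only; the kvm2 driver defines and starts the domain through the libvirt API rather than shelling out to virsh, and the XML file name here is hypothetical):

	virsh --connect qemu:///system define ha-055395-m02.xml
	virsh --connect qemu:///system start ha-055395-m02
	virsh --connect qemu:///system domifaddr ha-055395-m02   # poll for the DHCP-assigned IP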
	I0826 11:03:57.365313  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:e2:d8:6e in network default
	I0826 11:03:57.365914  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring networks are active...
	I0826 11:03:57.365942  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:57.366688  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring network default is active
	I0826 11:03:57.367068  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring network mk-ha-055395 is active
	I0826 11:03:57.367494  117024 main.go:141] libmachine: (ha-055395-m02) Getting domain xml...
	I0826 11:03:57.368172  117024 main.go:141] libmachine: (ha-055395-m02) Creating domain...
	I0826 11:03:58.586476  117024 main.go:141] libmachine: (ha-055395-m02) Waiting to get IP...
	I0826 11:03:58.587260  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:58.587652  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:58.587674  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:58.587630  117411 retry.go:31] will retry after 235.776027ms: waiting for machine to come up
	I0826 11:03:58.825143  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:58.825716  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:58.825747  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:58.825675  117411 retry.go:31] will retry after 269.486383ms: waiting for machine to come up
	I0826 11:03:59.097093  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.097562  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.097597  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.097517  117411 retry.go:31] will retry after 427.352721ms: waiting for machine to come up
	I0826 11:03:59.526343  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.526897  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.526932  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.526871  117411 retry.go:31] will retry after 411.230052ms: waiting for machine to come up
	I0826 11:03:59.939173  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.939687  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.939718  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.939636  117411 retry.go:31] will retry after 699.606269ms: waiting for machine to come up
	I0826 11:04:00.640504  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:00.641135  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:00.641165  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:00.641073  117411 retry.go:31] will retry after 906.425603ms: waiting for machine to come up
	I0826 11:04:01.549180  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:01.549749  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:01.549835  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:01.549724  117411 retry.go:31] will retry after 1.180965246s: waiting for machine to come up
	I0826 11:04:02.732557  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:02.733074  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:02.733112  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:02.733019  117411 retry.go:31] will retry after 937.830995ms: waiting for machine to come up
	I0826 11:04:03.671965  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:03.672355  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:03.672377  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:03.672311  117411 retry.go:31] will retry after 1.614048809s: waiting for machine to come up
	I0826 11:04:05.289158  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:05.289646  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:05.289671  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:05.289570  117411 retry.go:31] will retry after 1.660352387s: waiting for machine to come up
	I0826 11:04:06.951776  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:06.952237  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:06.952281  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:06.952117  117411 retry.go:31] will retry after 2.116784544s: waiting for machine to come up
	I0826 11:04:09.071540  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:09.072018  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:09.072043  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:09.071942  117411 retry.go:31] will retry after 3.356650421s: waiting for machine to come up
	I0826 11:04:12.429954  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:12.430444  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:12.430474  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:12.430409  117411 retry.go:31] will retry after 3.216911436s: waiting for machine to come up
	I0826 11:04:15.648479  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:15.648901  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:15.648924  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:15.648860  117411 retry.go:31] will retry after 4.040420472s: waiting for machine to come up
	I0826 11:04:19.692722  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.693185  117024 main.go:141] libmachine: (ha-055395-m02) Found IP for machine: 192.168.39.55
	I0826 11:04:19.693210  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has current primary IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.693216  117024 main.go:141] libmachine: (ha-055395-m02) Reserving static IP address...
	I0826 11:04:19.693567  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find host DHCP lease matching {name: "ha-055395-m02", mac: "52:54:00:5f:d6:56", ip: "192.168.39.55"} in network mk-ha-055395
	I0826 11:04:19.781117  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Getting to WaitForSSH function...
	I0826 11:04:19.781148  117024 main.go:141] libmachine: (ha-055395-m02) Reserved static IP address: 192.168.39.55
	I0826 11:04:19.781161  117024 main.go:141] libmachine: (ha-055395-m02) Waiting for SSH to be available...
	I0826 11:04:19.784367  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.784768  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:19.784795  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.784974  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using SSH client type: external
	I0826 11:04:19.784999  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa (-rw-------)
	I0826 11:04:19.785030  117024 main.go:141] libmachine: (ha-055395-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:04:19.785043  117024 main.go:141] libmachine: (ha-055395-m02) DBG | About to run SSH command:
	I0826 11:04:19.785064  117024 main.go:141] libmachine: (ha-055395-m02) DBG | exit 0
	I0826 11:04:19.915229  117024 main.go:141] libmachine: (ha-055395-m02) DBG | SSH cmd err, output: <nil>: 
	I0826 11:04:19.915559  117024 main.go:141] libmachine: (ha-055395-m02) KVM machine creation complete!
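The repeated "will retry after ...: waiting for machine to come up" lines above show the driver polling the libvirt DHCP leases with a growing, jittered delay until the new VM reports an address. A minimal Go sketch of that pattern (an assumed structure for illustration, not minikube's actual retry.go) looks like:

	package machine
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it returns a non-empty IP or the timeout
	// expires, sleeping a growing, jittered interval between attempts.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// Jitter keeps concurrent waiters from probing the lease table in lockstep.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2 // grow the delay, roughly matching the intervals in the log
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}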
	I0826 11:04:19.915873  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:04:19.916417  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:19.916675  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:19.916865  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:04:19.916883  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:04:19.918440  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:04:19.918459  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:04:19.918465  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:04:19.918471  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:19.920873  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.921334  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:19.921356  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.921499  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:19.921706  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:19.921870  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:19.922008  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:19.922142  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:19.922384  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:19.922398  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:04:20.038102  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:04:20.038125  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:04:20.038136  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.041029  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.041452  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.041479  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.041658  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.041929  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.042119  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.042346  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.042520  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.042736  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.042754  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:04:20.155301  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:04:20.155391  117024 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:04:20.155404  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:04:20.155412  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.155683  117024 buildroot.go:166] provisioning hostname "ha-055395-m02"
	I0826 11:04:20.155714  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.155950  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.158677  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.159089  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.159115  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.159260  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.159461  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.159648  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.159832  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.160036  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.160211  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.160224  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395-m02 && echo "ha-055395-m02" | sudo tee /etc/hostname
	I0826 11:04:20.288938  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395-m02
	
	I0826 11:04:20.288967  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.291507  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.291844  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.291875  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.292018  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.292221  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.292406  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.292583  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.292738  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.292903  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.292922  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:04:20.415598  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:04:20.415634  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:04:20.415668  117024 buildroot.go:174] setting up certificates
	I0826 11:04:20.415682  117024 provision.go:84] configureAuth start
	I0826 11:04:20.415697  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.416038  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:20.418919  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.419439  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.419471  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.419648  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.422258  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.422678  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.422708  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.422942  117024 provision.go:143] copyHostCerts
	I0826 11:04:20.422981  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:04:20.423021  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:04:20.423030  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:04:20.423098  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:04:20.423170  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:04:20.423187  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:04:20.423194  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:04:20.423216  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:04:20.423312  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:04:20.423332  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:04:20.423339  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:04:20.423364  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:04:20.423415  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395-m02 san=[127.0.0.1 192.168.39.55 ha-055395-m02 localhost minikube]
	I0826 11:04:20.503018  117024 provision.go:177] copyRemoteCerts
	I0826 11:04:20.503077  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:04:20.503104  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.505923  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.506307  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.506345  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.506622  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.506925  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.507112  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.507286  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:20.592967  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:04:20.593046  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:04:20.619679  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:04:20.619755  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 11:04:20.644651  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:04:20.644725  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:04:20.667901  117024 provision.go:87] duration metric: took 252.203794ms to configureAuth
	I0826 11:04:20.667931  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:04:20.668106  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:20.668216  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.670977  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.671395  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.671433  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.671752  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.672005  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.672211  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.672415  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.672608  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.672844  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.672878  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:04:20.948815  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:04:20.948861  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:04:20.948873  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetURL
	I0826 11:04:20.950251  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using libvirt version 6000000
	I0826 11:04:20.952436  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.952776  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.952807  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.952997  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:04:20.953021  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:04:20.953030  117024 client.go:171] duration metric: took 24.042605537s to LocalClient.Create
	I0826 11:04:20.953060  117024 start.go:167] duration metric: took 24.042663921s to libmachine.API.Create "ha-055395"
	I0826 11:04:20.953073  117024 start.go:293] postStartSetup for "ha-055395-m02" (driver="kvm2")
	I0826 11:04:20.953088  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:04:20.953113  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:20.953361  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:04:20.953392  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.955636  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.955962  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.955989  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.956118  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.956321  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.956465  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.956602  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.040754  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:04:21.044756  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:04:21.044795  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:04:21.044880  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:04:21.044975  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:04:21.044989  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:04:21.045101  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:04:21.054381  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:04:21.079000  117024 start.go:296] duration metric: took 125.909237ms for postStartSetup
	I0826 11:04:21.079062  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:04:21.079683  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:21.082204  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.082539  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.082570  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.082859  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:04:21.083097  117024 start.go:128] duration metric: took 24.191904547s to createHost
	I0826 11:04:21.083127  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:21.085311  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.085611  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.085640  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.085787  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.086000  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.086143  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.086286  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.086429  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:21.086612  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:21.086626  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:04:21.199436  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670261.177401120
	
	I0826 11:04:21.199477  117024 fix.go:216] guest clock: 1724670261.177401120
	I0826 11:04:21.199490  117024 fix.go:229] Guest: 2024-08-26 11:04:21.17740112 +0000 UTC Remote: 2024-08-26 11:04:21.083111953 +0000 UTC m=+71.286637863 (delta=94.289167ms)
	I0826 11:04:21.199519  117024 fix.go:200] guest clock delta is within tolerance: 94.289167ms
	I0826 11:04:21.199528  117024 start.go:83] releasing machines lock for "ha-055395-m02", held for 24.308458499s
	I0826 11:04:21.199551  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.199905  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:21.202606  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.202979  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.203011  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.205551  117024 out.go:177] * Found network options:
	I0826 11:04:21.207306  117024 out.go:177]   - NO_PROXY=192.168.39.150
	W0826 11:04:21.208816  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:04:21.208855  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209465  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209714  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209822  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:04:21.209879  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	W0826 11:04:21.209975  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:04:21.210049  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:04:21.210069  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:21.212915  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213120  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213267  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.213306  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213503  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.213735  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.213736  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.213767  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213903  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.214034  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.214110  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.214194  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.214320  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.214462  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.450828  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:04:21.457231  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:04:21.457318  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:04:21.472675  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:04:21.472709  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:04:21.472794  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:04:21.488170  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:04:21.501938  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:04:21.502010  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:04:21.515554  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:04:21.536633  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:04:21.651112  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:04:21.814641  117024 docker.go:233] disabling docker service ...
	I0826 11:04:21.814737  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:04:21.829435  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:04:21.843451  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:04:21.966209  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:04:22.100363  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:04:22.114335  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:04:22.133049  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:04:22.133127  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.143659  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:04:22.143745  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.154541  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.165107  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.175808  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:04:22.186717  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.197109  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.214180  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
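Note: the sed invocations above amount to a small CRI-O drop-in. After they run, the touched lines of /etc/crio/crio.conf.d/02-crio.conf should look roughly like the fragment below (reconstructed from the commands themselves, not read back from the host):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]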
	I0826 11:04:22.224402  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:04:22.233575  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:04:22.233633  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:04:22.245348  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:04:22.254931  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:22.376465  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:04:22.511044  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:04:22.511137  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:04:22.516213  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:04:22.516278  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:04:22.519857  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:04:22.558773  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:04:22.558878  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:04:22.586918  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:04:22.614172  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:04:22.615863  117024 out.go:177]   - env NO_PROXY=192.168.39.150
	I0826 11:04:22.616968  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:22.619594  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:22.619939  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:22.619968  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:22.620182  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:04:22.624219  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:04:22.636424  117024 mustload.go:65] Loading cluster: ha-055395
	I0826 11:04:22.636648  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:22.636947  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:22.636978  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:22.653019  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0826 11:04:22.653445  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:22.653979  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:22.654003  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:22.654293  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:22.654451  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:04:22.656162  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:04:22.656466  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:22.656494  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:22.672944  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0826 11:04:22.673387  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:22.673904  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:22.673927  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:22.674288  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:22.674532  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:04:22.674696  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.55
	I0826 11:04:22.674707  117024 certs.go:194] generating shared ca certs ...
	I0826 11:04:22.674729  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.674916  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:04:22.674975  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:04:22.674990  117024 certs.go:256] generating profile certs ...
	I0826 11:04:22.675079  117024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:04:22.675113  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee
	I0826 11:04:22.675135  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.254]
	I0826 11:04:22.976698  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee ...
	I0826 11:04:22.976739  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee: {Name:mkeb2908f5b47e6d9f85b9f602bb10303a420458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.976948  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee ...
	I0826 11:04:22.976967  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee: {Name:mk9f231c451e39cdf747da04fd51f79cf7ff682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.977074  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:04:22.977234  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:04:22.977398  117024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
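Note: the apiserver serving certificate generated above carries the SANs listed in the crypto.go line (the in-cluster service IPs, localhost, both node IPs and the 192.168.39.254 VIP). Once the file has been copied to the node (see the scp to /var/lib/minikube/certs/apiserver.crt further down), the SANs can be confirmed with a plain openssl call:

	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'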
	I0826 11:04:22.977420  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:04:22.977439  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:04:22.977460  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:04:22.977479  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:04:22.977497  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:04:22.977515  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:04:22.977540  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:04:22.977564  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:04:22.977628  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:04:22.977668  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:04:22.977683  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:04:22.977719  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:04:22.977751  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:04:22.977784  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:04:22.977838  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:04:22.977875  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:04:22.977895  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:04:22.977914  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:22.977959  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:04:22.981395  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:22.981666  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:04:22.981700  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:22.981908  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:04:22.982119  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:04:22.982277  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:04:22.982451  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:04:23.055349  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0826 11:04:23.060341  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0826 11:04:23.073292  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0826 11:04:23.077557  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0826 11:04:23.088490  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0826 11:04:23.092368  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0826 11:04:23.103218  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0826 11:04:23.107202  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0826 11:04:23.117560  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0826 11:04:23.121518  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0826 11:04:23.132215  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0826 11:04:23.136539  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0826 11:04:23.147116  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:04:23.171409  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:04:23.194403  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:04:23.218506  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:04:23.242215  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0826 11:04:23.267411  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:04:23.293022  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:04:23.317897  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:04:23.342271  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:04:23.367334  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:04:23.393316  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:04:23.419977  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0826 11:04:23.438309  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0826 11:04:23.456566  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0826 11:04:23.473775  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0826 11:04:23.490302  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0826 11:04:23.506585  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0826 11:04:23.522954  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0826 11:04:23.539793  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:04:23.545182  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:04:23.556023  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.560362  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.560421  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.566159  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:04:23.576639  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:04:23.587107  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.591447  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.591531  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.597431  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:04:23.608465  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:04:23.619554  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.624141  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.624224  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.630571  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
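Note: the test -L / ln -fs pairs follow OpenSSL's c_rehash convention: each trusted PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (the value the preceding openssl x509 -hash calls print) with a .0 suffix, which is how OpenSSL-based clients locate it. A quick sanity check on the node, assuming the links are in place:

	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem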
	I0826 11:04:23.644543  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:04:23.648946  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:04:23.649016  117024 kubeadm.go:934] updating node {m02 192.168.39.55 8443 v1.31.0 crio true true} ...
	I0826 11:04:23.649106  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:04:23.649133  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:04:23.649178  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:04:23.666133  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:04:23.666229  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
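Note: this manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a little further down, so kubelet runs kube-vip as a static pod. lb_enable relies on IPVS, which is presumably why the ip_vs modules were modprobed before the config was rendered, and the advertised VIP 192.168.39.254 should appear on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease. Two host-side checks (illustrative, not part of the test):

	lsmod | grep ip_vs
	ip addr show eth0 | grep 192.168.39.254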
	I0826 11:04:23.666291  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:04:23.676935  117024 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0826 11:04:23.677018  117024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0826 11:04:23.687068  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0826 11:04:23.687097  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:04:23.687153  117024 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0826 11:04:23.687165  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:04:23.687181  117024 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0826 11:04:23.692261  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0826 11:04:23.692318  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0826 11:04:24.574583  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:04:24.574668  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:04:24.580440  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0826 11:04:24.580492  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0826 11:04:24.793518  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:04:24.834011  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:04:24.834141  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:04:24.841051  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0826 11:04:24.841112  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
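Note: each binary pulled from dl.k8s.io is verified against the matching .sha256 file, per the checksum=file: URLs in the download.go lines above. Done by hand for kubelet, the equivalent would be:

	curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check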
	I0826 11:04:25.165316  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0826 11:04:25.174892  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0826 11:04:25.190811  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:04:25.206605  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:04:25.222691  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:04:25.226482  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:04:25.238149  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:25.353026  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:04:25.369617  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:04:25.370119  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:25.370166  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:25.386372  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0826 11:04:25.386895  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:25.387386  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:25.387415  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:25.387809  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:25.388059  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:04:25.388270  117024 start.go:317] joinCluster: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:04:25.388403  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0826 11:04:25.388426  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:04:25.391396  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:25.391851  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:04:25.391879  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:25.392055  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:04:25.392326  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:04:25.392509  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:04:25.392691  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:04:25.535560  117024 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:04:25.535616  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token heb248.n7ez3d7n5wzk63lz --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m02 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I0826 11:04:47.746497  117024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token heb248.n7ez3d7n5wzk63lz --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m02 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (22.210841711s)
	I0826 11:04:47.746559  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0826 11:04:48.284464  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395-m02 minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=false
	I0826 11:04:48.440122  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-055395-m02 node-role.kubernetes.io/control-plane:NoSchedule-
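Note: the two kubectl runs above label the freshly joined node as a non-primary minikube member and remove the control-plane NoSchedule taint so it can also schedule regular pods. Whether both took effect can be checked afterwards, for example:

	kubectl --kubeconfig /var/lib/minikube/kubeconfig get node ha-055395-m02 --show-labels
	kubectl --kubeconfig /var/lib/minikube/kubeconfig describe node ha-055395-m02 | grep Taints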
	I0826 11:04:48.547073  117024 start.go:319] duration metric: took 23.158795151s to joinCluster
	I0826 11:04:48.547165  117024 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:04:48.547518  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:48.548765  117024 out.go:177] * Verifying Kubernetes components...
	I0826 11:04:48.549939  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:48.804434  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:04:48.860158  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:04:48.860433  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0826 11:04:48.860510  117024 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0826 11:04:48.860780  117024 node_ready.go:35] waiting up to 6m0s for node "ha-055395-m02" to be "Ready" ...
	I0826 11:04:48.860903  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:48.860913  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:48.860925  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:48.860935  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:48.871060  117024 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0826 11:04:49.361064  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:49.361089  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:49.361099  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:49.361106  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:49.367294  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:04:49.861822  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:49.861860  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:49.861871  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:49.861879  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:49.868555  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:04:50.361911  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:50.361937  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:50.361949  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:50.361955  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:50.366981  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:04:50.861200  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:50.861224  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:50.861232  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:50.861237  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:50.864986  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:50.865469  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
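Note: the repeated GETs of /api/v1/nodes/ha-055395-m02 are node_ready.go polling the node object until its Ready condition turns True (hence the "Ready":"False" status lines every few requests). The same check from a shell would be something like:

	kubectl get node ha-055395-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'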
	I0826 11:04:51.361694  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:51.361715  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:51.361724  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:51.361729  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:51.365282  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:51.861230  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:51.861255  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:51.861264  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:51.861267  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:51.864976  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:52.361402  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:52.361433  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:52.361445  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:52.361452  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:52.366098  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:52.861904  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:52.861935  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:52.861946  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:52.861952  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:52.865462  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:52.865917  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:53.361313  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:53.361338  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:53.361345  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:53.361349  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:53.364973  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:53.861451  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:53.861476  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:53.861484  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:53.861488  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:53.865244  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:54.361383  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:54.361410  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:54.361422  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:54.361428  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:54.364518  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:54.861666  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:54.861689  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:54.861698  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:54.861704  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:54.865738  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:54.866477  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:55.361095  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:55.361119  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:55.361127  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:55.361131  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:55.364567  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:55.861780  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:55.861811  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:55.861822  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:55.861829  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:55.866393  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:56.361782  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:56.361811  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:56.361819  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:56.361822  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:56.365252  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:56.861287  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:56.861318  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:56.861330  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:56.861337  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:56.864714  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:57.361948  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:57.361972  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:57.361981  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:57.361986  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:57.365912  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:57.366593  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:57.861888  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:57.861915  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:57.861925  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:57.861930  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:57.865634  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:58.361904  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:58.361931  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:58.361941  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:58.361945  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:58.365667  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:58.861853  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:58.861891  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:58.861900  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:58.861907  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:58.865726  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.361793  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:59.361823  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:59.361834  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:59.361840  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:59.365832  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.861260  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:59.861285  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:59.861294  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:59.861299  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:59.864892  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.865370  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:00.361267  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:00.361291  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:00.361299  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:00.361305  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:00.365438  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:00.861087  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:00.861114  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:00.861122  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:00.861126  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:00.864819  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:01.361902  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:01.361926  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:01.361936  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:01.361940  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:01.369857  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:01.861803  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:01.861828  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:01.861844  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:01.861848  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:01.871050  117024 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0826 11:05:01.871847  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:02.361608  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:02.361633  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:02.361642  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:02.361648  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:02.365064  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:02.861963  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:02.861990  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:02.862000  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:02.862006  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:02.865660  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:03.361732  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:03.361755  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:03.361764  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:03.361768  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:03.364737  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:03.861552  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:03.861601  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:03.861614  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:03.861621  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:03.865170  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:04.361105  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:04.361133  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:04.361145  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:04.361152  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:04.368710  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:04.369243  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:04.861827  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:04.861851  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:04.861859  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:04.861871  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:04.865949  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:05.361138  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.361173  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.361182  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.361187  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.365128  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:05.861020  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.861043  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.861050  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.861055  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.864793  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:05.865497  117024 node_ready.go:49] node "ha-055395-m02" has status "Ready":"True"
	I0826 11:05:05.865520  117024 node_ready.go:38] duration metric: took 17.004719825s for node "ha-055395-m02" to be "Ready" ...
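
The loop above polls GET /api/v1/nodes/ha-055395-m02 roughly every 500ms until the node reports a Ready condition of True. A minimal client-go sketch of the same kind of check (kubeconfig path, node name, poll interval and timeout are illustrative assumptions, not values taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node object until its Ready condition is True.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "ha-055395-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
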
	I0826 11:05:05.865530  117024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:05:05.865650  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:05.865664  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.865672  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.865675  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.870300  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:05.876702  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.876824  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-l9bd4
	I0826 11:05:05.876838  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.876849  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.876853  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.879865  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.880686  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.880712  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.880724  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.880733  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.883394  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.883987  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.884013  117024 pod_ready.go:82] duration metric: took 7.283098ms for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.884025  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.884102  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nxb7s
	I0826 11:05:05.884111  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.884118  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.884121  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.889711  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:05:05.890322  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.890337  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.890346  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.890350  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.892694  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.893279  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.893299  117024 pod_ready.go:82] duration metric: took 9.266073ms for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.893309  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.893362  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395
	I0826 11:05:05.893369  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.893376  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.893382  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.895591  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.896319  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.896337  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.896344  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.896347  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.898519  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.899070  117024 pod_ready.go:93] pod "etcd-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.899092  117024 pod_ready.go:82] duration metric: took 5.777255ms for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.899101  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.899154  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m02
	I0826 11:05:05.899161  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.899169  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.899172  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.901532  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.902187  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.902203  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.902210  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.902213  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.904416  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.904939  117024 pod_ready.go:93] pod "etcd-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.904963  117024 pod_ready.go:82] duration metric: took 5.854431ms for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.904981  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.061433  117024 request.go:632] Waited for 156.35745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:05:06.061501  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:05:06.061506  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.061514  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.061519  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.065047  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.261049  117024 request.go:632] Waited for 195.314476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:06.261148  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:06.261158  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.261166  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.261170  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.264280  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.264795  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:06.264819  117024 pod_ready.go:82] duration metric: took 359.824941ms for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
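
The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10); bursts of GETs sleep in the client before being sent. A hedged sketch of raising those limits on rest.Config, with purely illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (path is an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst trades the client-side waits seen in the log for
	// more concurrent load on the API server.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
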
	I0826 11:05:06.264833  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.461954  117024 request.go:632] Waited for 197.042196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:05:06.462020  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:05:06.462025  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.462033  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.462036  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.466440  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:06.661712  117024 request.go:632] Waited for 194.398891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:06.661794  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:06.661808  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.661823  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.661833  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.665283  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.665829  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:06.665851  117024 pod_ready.go:82] duration metric: took 401.010339ms for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.665864  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.861966  117024 request.go:632] Waited for 196.012019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:05:06.862037  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:05:06.862045  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.862055  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.862061  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.865261  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.061325  117024 request.go:632] Waited for 195.388402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.061387  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.061392  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.061400  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.061404  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.064536  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.065236  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.065255  117024 pod_ready.go:82] duration metric: took 399.384546ms for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.065265  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.261470  117024 request.go:632] Waited for 196.113192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:05:07.261554  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:05:07.261560  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.261568  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.261573  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.267347  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:05:07.461377  117024 request.go:632] Waited for 193.362458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:07.461461  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:07.461467  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.461476  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.461481  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.464748  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.465458  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.465484  117024 pod_ready.go:82] duration metric: took 400.213326ms for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.465496  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.661597  117024 request.go:632] Waited for 195.989071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:05:07.661665  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:05:07.661672  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.661682  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.661687  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.665479  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.861508  117024 request.go:632] Waited for 195.342602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.861590  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.861596  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.861603  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.861609  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.865114  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.865765  117024 pod_ready.go:93] pod "kube-proxy-g45pb" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.865792  117024 pod_ready.go:82] duration metric: took 400.284091ms for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.865808  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.061829  117024 request.go:632] Waited for 195.942501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:05:08.061902  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:05:08.061909  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.061919  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.061931  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.065427  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.261431  117024 request.go:632] Waited for 195.392111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:08.261508  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:08.261513  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.261521  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.261525  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.264930  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.265453  117024 pod_ready.go:93] pod "kube-proxy-zl5bm" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:08.265474  117024 pod_ready.go:82] duration metric: took 399.656236ms for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.265485  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.461636  117024 request.go:632] Waited for 196.077133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:05:08.461727  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:05:08.461734  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.461743  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.461748  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.465553  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.661574  117024 request.go:632] Waited for 195.266587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:08.661661  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:08.661679  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.661701  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.661723  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.666146  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:08.666746  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:08.666774  117024 pod_ready.go:82] duration metric: took 401.281947ms for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.666789  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.861810  117024 request.go:632] Waited for 194.923664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:05:08.861893  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:05:08.861902  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.861915  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.861920  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.866150  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.061108  117024 request.go:632] Waited for 194.349918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:09.061183  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:09.061190  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.061198  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.061201  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.065073  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:09.065770  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:09.065788  117024 pod_ready.go:82] duration metric: took 398.991846ms for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:09.065799  117024 pod_ready.go:39] duration metric: took 3.200230423s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
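
pod_ready above looks up each system-critical pod by the listed labels and checks its Ready condition. A rough client-go sketch of the same idea (selectors copied from the log line; kubeconfig path and timeout are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}

	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all selected kube-system pods are Ready")
}
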
	I0826 11:05:09.065819  117024 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:05:09.065872  117024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:05:09.081270  117024 api_server.go:72] duration metric: took 20.534056416s to wait for apiserver process to appear ...
	I0826 11:05:09.081304  117024 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:05:09.081329  117024 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0826 11:05:09.088100  117024 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0826 11:05:09.088179  117024 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0826 11:05:09.088191  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.088200  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.088206  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.089274  117024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0826 11:05:09.089389  117024 api_server.go:141] control plane version: v1.31.0
	I0826 11:05:09.089407  117024 api_server.go:131] duration metric: took 8.095684ms to wait for apiserver health ...
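
The two probes above hit /healthz and /version on the API server. A small sketch of both calls through a client-go discovery client (kubeconfig path is an assumption; the expected healthz body is "ok", and the version in this run is v1.31.0):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz through the discovery client's REST client.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version to read the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
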
	I0826 11:05:09.089415  117024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:05:09.261829  117024 request.go:632] Waited for 172.333523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.261895  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.261900  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.261913  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.261917  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.269367  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:09.273926  117024 system_pods.go:59] 17 kube-system pods found
	I0826 11:05:09.273962  117024 system_pods.go:61] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:05:09.273969  117024 system_pods.go:61] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:05:09.273972  117024 system_pods.go:61] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:05:09.273976  117024 system_pods.go:61] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:05:09.273979  117024 system_pods.go:61] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:05:09.273982  117024 system_pods.go:61] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:05:09.273985  117024 system_pods.go:61] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:05:09.273991  117024 system_pods.go:61] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:05:09.273994  117024 system_pods.go:61] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:05:09.273996  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:05:09.273999  117024 system_pods.go:61] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:05:09.274001  117024 system_pods.go:61] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:05:09.274004  117024 system_pods.go:61] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:05:09.274008  117024 system_pods.go:61] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:05:09.274011  117024 system_pods.go:61] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:05:09.274014  117024 system_pods.go:61] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:05:09.274017  117024 system_pods.go:61] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:05:09.274024  117024 system_pods.go:74] duration metric: took 184.602023ms to wait for pod list to return data ...
	I0826 11:05:09.274032  117024 default_sa.go:34] waiting for default service account to be created ...
	I0826 11:05:09.461497  117024 request.go:632] Waited for 187.376448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:05:09.461558  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:05:09.461565  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.461575  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.461583  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.465682  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.465913  117024 default_sa.go:45] found service account: "default"
	I0826 11:05:09.465932  117024 default_sa.go:55] duration metric: took 191.891229ms for default service account to be created ...
	I0826 11:05:09.465943  117024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 11:05:09.661105  117024 request.go:632] Waited for 195.09125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.661182  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.661188  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.661209  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.661216  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.665620  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.671565  117024 system_pods.go:86] 17 kube-system pods found
	I0826 11:05:09.671606  117024 system_pods.go:89] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:05:09.671615  117024 system_pods.go:89] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:05:09.671619  117024 system_pods.go:89] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:05:09.671624  117024 system_pods.go:89] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:05:09.671628  117024 system_pods.go:89] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:05:09.671632  117024 system_pods.go:89] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:05:09.671636  117024 system_pods.go:89] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:05:09.671639  117024 system_pods.go:89] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:05:09.671643  117024 system_pods.go:89] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:05:09.671648  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:05:09.671652  117024 system_pods.go:89] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:05:09.671657  117024 system_pods.go:89] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:05:09.671661  117024 system_pods.go:89] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:05:09.671668  117024 system_pods.go:89] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:05:09.671671  117024 system_pods.go:89] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:05:09.671674  117024 system_pods.go:89] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:05:09.671678  117024 system_pods.go:89] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:05:09.671685  117024 system_pods.go:126] duration metric: took 205.736594ms to wait for k8s-apps to be running ...
	I0826 11:05:09.671694  117024 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 11:05:09.671752  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:05:09.689100  117024 system_svc.go:56] duration metric: took 17.383966ms WaitForService to wait for kubelet
	I0826 11:05:09.689135  117024 kubeadm.go:582] duration metric: took 21.141926576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:05:09.689159  117024 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:05:09.861889  117024 request.go:632] Waited for 172.626501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0826 11:05:09.861954  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0826 11:05:09.861960  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.861973  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.861980  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.865779  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:09.866767  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:05:09.866794  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:05:09.866806  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:05:09.866809  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:05:09.866813  117024 node_conditions.go:105] duration metric: took 177.648393ms to run NodePressure ...
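
The NodePressure check above reads each node's capacity (cpu, ephemeral-storage) from a single GET /api/v1/nodes. A client-go sketch that prints the same fields plus the pressure conditions (kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())

		// Pressure conditions should all be False on a healthy node.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
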
	I0826 11:05:09.866827  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:05:09.866865  117024 start.go:255] writing updated cluster config ...
	I0826 11:05:09.869315  117024 out.go:201] 
	I0826 11:05:09.871104  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:09.871207  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:09.872908  117024 out.go:177] * Starting "ha-055395-m03" control-plane node in "ha-055395" cluster
	I0826 11:05:09.874141  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:05:09.874169  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:05:09.874292  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:05:09.874308  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:05:09.874398  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:09.874604  117024 start.go:360] acquireMachinesLock for ha-055395-m03: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:05:09.874657  117024 start.go:364] duration metric: took 31.281µs to acquireMachinesLock for "ha-055395-m03"
	I0826 11:05:09.874684  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:09.874790  117024 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0826 11:05:09.876597  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:05:09.876696  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:09.876739  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:09.894431  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0826 11:05:09.895003  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:09.895611  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:09.895635  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:09.895980  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:09.896192  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:09.896372  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:09.896568  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:05:09.896607  117024 client.go:168] LocalClient.Create starting
	I0826 11:05:09.896645  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:05:09.896691  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:05:09.896718  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:05:09.896795  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:05:09.896842  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:05:09.896854  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:05:09.896873  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:05:09.896881  117024 main.go:141] libmachine: (ha-055395-m03) Calling .PreCreateCheck
	I0826 11:05:09.897088  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:09.897544  117024 main.go:141] libmachine: Creating machine...
	I0826 11:05:09.897560  117024 main.go:141] libmachine: (ha-055395-m03) Calling .Create
	I0826 11:05:09.897707  117024 main.go:141] libmachine: (ha-055395-m03) Creating KVM machine...
	I0826 11:05:09.899194  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found existing default KVM network
	I0826 11:05:09.899385  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found existing private KVM network mk-ha-055395
	I0826 11:05:09.899621  117024 main.go:141] libmachine: (ha-055395-m03) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 ...
	I0826 11:05:09.899645  117024 main.go:141] libmachine: (ha-055395-m03) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:05:09.899762  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:09.899614  117790 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:05:09.899860  117024 main.go:141] libmachine: (ha-055395-m03) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:05:10.156303  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.156140  117790 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa...
	I0826 11:05:10.428332  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.428217  117790 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/ha-055395-m03.rawdisk...
	I0826 11:05:10.428366  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Writing magic tar header
	I0826 11:05:10.428381  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Writing SSH key tar header
	I0826 11:05:10.428400  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.428339  117790 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 ...
	I0826 11:05:10.428518  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03
	I0826 11:05:10.428548  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 (perms=drwx------)
	I0826 11:05:10.428559  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:05:10.428572  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:05:10.428581  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:05:10.428596  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:05:10.428608  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:05:10.428621  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:05:10.428632  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:05:10.428648  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:05:10.428660  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:05:10.428673  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:05:10.428684  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home
	I0826 11:05:10.428695  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Skipping /home - not owner
	I0826 11:05:10.428734  117024 main.go:141] libmachine: (ha-055395-m03) Creating domain...
	I0826 11:05:10.429624  117024 main.go:141] libmachine: (ha-055395-m03) define libvirt domain using xml: 
	I0826 11:05:10.429647  117024 main.go:141] libmachine: (ha-055395-m03) <domain type='kvm'>
	I0826 11:05:10.429656  117024 main.go:141] libmachine: (ha-055395-m03)   <name>ha-055395-m03</name>
	I0826 11:05:10.429663  117024 main.go:141] libmachine: (ha-055395-m03)   <memory unit='MiB'>2200</memory>
	I0826 11:05:10.429672  117024 main.go:141] libmachine: (ha-055395-m03)   <vcpu>2</vcpu>
	I0826 11:05:10.429680  117024 main.go:141] libmachine: (ha-055395-m03)   <features>
	I0826 11:05:10.429693  117024 main.go:141] libmachine: (ha-055395-m03)     <acpi/>
	I0826 11:05:10.429700  117024 main.go:141] libmachine: (ha-055395-m03)     <apic/>
	I0826 11:05:10.429720  117024 main.go:141] libmachine: (ha-055395-m03)     <pae/>
	I0826 11:05:10.429728  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.429734  117024 main.go:141] libmachine: (ha-055395-m03)   </features>
	I0826 11:05:10.429738  117024 main.go:141] libmachine: (ha-055395-m03)   <cpu mode='host-passthrough'>
	I0826 11:05:10.429743  117024 main.go:141] libmachine: (ha-055395-m03)   
	I0826 11:05:10.429749  117024 main.go:141] libmachine: (ha-055395-m03)   </cpu>
	I0826 11:05:10.429754  117024 main.go:141] libmachine: (ha-055395-m03)   <os>
	I0826 11:05:10.429759  117024 main.go:141] libmachine: (ha-055395-m03)     <type>hvm</type>
	I0826 11:05:10.429767  117024 main.go:141] libmachine: (ha-055395-m03)     <boot dev='cdrom'/>
	I0826 11:05:10.429784  117024 main.go:141] libmachine: (ha-055395-m03)     <boot dev='hd'/>
	I0826 11:05:10.429794  117024 main.go:141] libmachine: (ha-055395-m03)     <bootmenu enable='no'/>
	I0826 11:05:10.429806  117024 main.go:141] libmachine: (ha-055395-m03)   </os>
	I0826 11:05:10.429814  117024 main.go:141] libmachine: (ha-055395-m03)   <devices>
	I0826 11:05:10.429821  117024 main.go:141] libmachine: (ha-055395-m03)     <disk type='file' device='cdrom'>
	I0826 11:05:10.429833  117024 main.go:141] libmachine: (ha-055395-m03)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/boot2docker.iso'/>
	I0826 11:05:10.429843  117024 main.go:141] libmachine: (ha-055395-m03)       <target dev='hdc' bus='scsi'/>
	I0826 11:05:10.429849  117024 main.go:141] libmachine: (ha-055395-m03)       <readonly/>
	I0826 11:05:10.429857  117024 main.go:141] libmachine: (ha-055395-m03)     </disk>
	I0826 11:05:10.429870  117024 main.go:141] libmachine: (ha-055395-m03)     <disk type='file' device='disk'>
	I0826 11:05:10.429882  117024 main.go:141] libmachine: (ha-055395-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:05:10.429893  117024 main.go:141] libmachine: (ha-055395-m03)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/ha-055395-m03.rawdisk'/>
	I0826 11:05:10.429908  117024 main.go:141] libmachine: (ha-055395-m03)       <target dev='hda' bus='virtio'/>
	I0826 11:05:10.429920  117024 main.go:141] libmachine: (ha-055395-m03)     </disk>
	I0826 11:05:10.429928  117024 main.go:141] libmachine: (ha-055395-m03)     <interface type='network'>
	I0826 11:05:10.429937  117024 main.go:141] libmachine: (ha-055395-m03)       <source network='mk-ha-055395'/>
	I0826 11:05:10.429947  117024 main.go:141] libmachine: (ha-055395-m03)       <model type='virtio'/>
	I0826 11:05:10.429955  117024 main.go:141] libmachine: (ha-055395-m03)     </interface>
	I0826 11:05:10.429965  117024 main.go:141] libmachine: (ha-055395-m03)     <interface type='network'>
	I0826 11:05:10.429972  117024 main.go:141] libmachine: (ha-055395-m03)       <source network='default'/>
	I0826 11:05:10.429979  117024 main.go:141] libmachine: (ha-055395-m03)       <model type='virtio'/>
	I0826 11:05:10.429986  117024 main.go:141] libmachine: (ha-055395-m03)     </interface>
	I0826 11:05:10.429999  117024 main.go:141] libmachine: (ha-055395-m03)     <serial type='pty'>
	I0826 11:05:10.430042  117024 main.go:141] libmachine: (ha-055395-m03)       <target port='0'/>
	I0826 11:05:10.430067  117024 main.go:141] libmachine: (ha-055395-m03)     </serial>
	I0826 11:05:10.430077  117024 main.go:141] libmachine: (ha-055395-m03)     <console type='pty'>
	I0826 11:05:10.430092  117024 main.go:141] libmachine: (ha-055395-m03)       <target type='serial' port='0'/>
	I0826 11:05:10.430103  117024 main.go:141] libmachine: (ha-055395-m03)     </console>
	I0826 11:05:10.430111  117024 main.go:141] libmachine: (ha-055395-m03)     <rng model='virtio'>
	I0826 11:05:10.430123  117024 main.go:141] libmachine: (ha-055395-m03)       <backend model='random'>/dev/random</backend>
	I0826 11:05:10.430133  117024 main.go:141] libmachine: (ha-055395-m03)     </rng>
	I0826 11:05:10.430142  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.430151  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.430159  117024 main.go:141] libmachine: (ha-055395-m03)   </devices>
	I0826 11:05:10.430173  117024 main.go:141] libmachine: (ha-055395-m03) </domain>
	I0826 11:05:10.430183  117024 main.go:141] libmachine: (ha-055395-m03) 
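The `<domain>` definition logged line by line above is rendered from a template and handed to libvirt. As a rough illustration of that step, here is a minimal, hypothetical Go sketch (struct and field names are invented for this example, not minikube's actual kvm driver code) that fills a comparable libvirt domain XML with text/template:

package main

import (
	"os"
	"text/template"
)

// DomainConfig holds the handful of values that vary per machine.
// These names are illustrative, not minikube's real configuration struct.
type DomainConfig struct {
	Name    string
	MemMiB  int
	VCPUs   int
	ISO     string
	Disk    string
	Network string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	cfg := DomainConfig{
		Name:    "ha-055395-m03",
		MemMiB:  2200,
		VCPUs:   2,
		ISO:     "/path/to/boot2docker.iso",      // placeholder path
		Disk:    "/path/to/ha-055395-m03.rawdisk", // placeholder path
		Network: "mk-ha-055395",
	}
	// Render the XML to stdout; a driver would pass this string to libvirt's define-domain call.
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}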
	I0826 11:05:10.437631  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:af:f5:37 in network default
	I0826 11:05:10.438408  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:10.438437  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring networks are active...
	I0826 11:05:10.439282  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring network default is active
	I0826 11:05:10.439697  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring network mk-ha-055395 is active
	I0826 11:05:10.440082  117024 main.go:141] libmachine: (ha-055395-m03) Getting domain xml...
	I0826 11:05:10.440757  117024 main.go:141] libmachine: (ha-055395-m03) Creating domain...
	I0826 11:05:11.695519  117024 main.go:141] libmachine: (ha-055395-m03) Waiting to get IP...
	I0826 11:05:11.696382  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:11.696893  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:11.696927  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:11.696862  117790 retry.go:31] will retry after 237.697037ms: waiting for machine to come up
	I0826 11:05:11.936330  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:11.936843  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:11.936875  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:11.936805  117790 retry.go:31] will retry after 256.411063ms: waiting for machine to come up
	I0826 11:05:12.195253  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:12.195710  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:12.195735  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:12.195662  117790 retry.go:31] will retry after 410.928155ms: waiting for machine to come up
	I0826 11:05:12.608313  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:12.608816  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:12.608849  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:12.608750  117790 retry.go:31] will retry after 450.604024ms: waiting for machine to come up
	I0826 11:05:13.061050  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:13.061544  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:13.061583  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:13.061484  117790 retry.go:31] will retry after 526.801583ms: waiting for machine to come up
	I0826 11:05:13.590087  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:13.590593  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:13.590620  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:13.590552  117790 retry.go:31] will retry after 849.29226ms: waiting for machine to come up
	I0826 11:05:14.441473  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:14.441829  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:14.441859  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:14.441776  117790 retry.go:31] will retry after 1.189728783s: waiting for machine to come up
	I0826 11:05:15.633195  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:15.633639  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:15.633669  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:15.633588  117790 retry.go:31] will retry after 1.199187401s: waiting for machine to come up
	I0826 11:05:16.835147  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:16.835662  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:16.835704  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:16.835620  117790 retry.go:31] will retry after 1.739710221s: waiting for machine to come up
	I0826 11:05:18.576454  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:18.576874  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:18.576897  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:18.576826  117790 retry.go:31] will retry after 2.199446152s: waiting for machine to come up
	I0826 11:05:20.778273  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:20.778823  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:20.778875  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:20.778757  117790 retry.go:31] will retry after 2.636484153s: waiting for machine to come up
	I0826 11:05:23.416998  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:23.417588  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:23.417611  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:23.417518  117790 retry.go:31] will retry after 3.455957799s: waiting for machine to come up
	I0826 11:05:26.876008  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:26.876560  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:26.876586  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:26.876513  117790 retry.go:31] will retry after 4.202229574s: waiting for machine to come up
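The repeated "will retry after ..." lines above are a polling loop that waits, with growing jittered delays, for the new domain to obtain a DHCP lease. A minimal Go sketch of that kind of wait loop (hypothetical; lookupIP stands in for the driver's lease query and is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt network's DHCP leases;
// it fails until the guest has obtained an address.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls lookupIP, sleeping a growing, jittered interval between
// attempts, and gives up once the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay, roughly like the intervals in the log above
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP:", ip)
	}
}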
	I0826 11:05:31.080465  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.081003  117024 main.go:141] libmachine: (ha-055395-m03) Found IP for machine: 192.168.39.209
	I0826 11:05:31.081029  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has current primary IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.081037  117024 main.go:141] libmachine: (ha-055395-m03) Reserving static IP address...
	I0826 11:05:31.081461  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find host DHCP lease matching {name: "ha-055395-m03", mac: "52:54:00:66:85:18", ip: "192.168.39.209"} in network mk-ha-055395
	I0826 11:05:31.166774  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Getting to WaitForSSH function...
	I0826 11:05:31.166804  117024 main.go:141] libmachine: (ha-055395-m03) Reserved static IP address: 192.168.39.209
	I0826 11:05:31.166821  117024 main.go:141] libmachine: (ha-055395-m03) Waiting for SSH to be available...
	I0826 11:05:31.170060  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.170532  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.170562  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.170722  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using SSH client type: external
	I0826 11:05:31.170753  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa (-rw-------)
	I0826 11:05:31.170787  117024 main.go:141] libmachine: (ha-055395-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:05:31.170806  117024 main.go:141] libmachine: (ha-055395-m03) DBG | About to run SSH command:
	I0826 11:05:31.170821  117024 main.go:141] libmachine: (ha-055395-m03) DBG | exit 0
	I0826 11:05:31.299210  117024 main.go:141] libmachine: (ha-055395-m03) DBG | SSH cmd err, output: <nil>: 
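The WaitForSSH step shown above simply runs "exit 0" over SSH until the command succeeds, using the external ssh client with the options listed in the log. A self-contained Go sketch of that probe (paths and retry count are placeholders, not minikube's values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... exit 0` succeeds against the guest,
// which is the signal that the machine is reachable for provisioning.
func sshReady(ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run()
}

func main() {
	ip, key := "192.168.39.209", "/path/to/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if err := sshReady(ip, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}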
	I0826 11:05:31.299491  117024 main.go:141] libmachine: (ha-055395-m03) KVM machine creation complete!
	I0826 11:05:31.299798  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:31.300673  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:31.300901  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:31.301145  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:05:31.301162  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:05:31.302529  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:05:31.302542  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:05:31.302548  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:05:31.302554  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.304944  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.305403  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.305439  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.305607  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.305821  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.306032  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.306190  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.306379  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.306653  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.306670  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:05:31.418170  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:05:31.418208  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:05:31.418219  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.421287  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.421743  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.421770  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.422108  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.422320  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.422524  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.422622  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.422860  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.423114  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.423131  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:05:31.539362  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:05:31.539439  117024 main.go:141] libmachine: found compatible host: buildroot
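The provisioner is chosen by reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A small, hypothetical Go parser for that output, just to show what the detection keys on:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID= field from /etc/os-release content.
func osReleaseID(content string) string {
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(sample)) // prints "buildroot", so the buildroot provisioner is used
}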
	I0826 11:05:31.539453  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:05:31.539466  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.539715  117024 buildroot.go:166] provisioning hostname "ha-055395-m03"
	I0826 11:05:31.539744  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.539963  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.542762  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.543219  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.543248  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.543412  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.543603  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.543797  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.543921  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.544113  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.544284  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.544295  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395-m03 && echo "ha-055395-m03" | sudo tee /etc/hostname
	I0826 11:05:31.673405  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395-m03
	
	I0826 11:05:31.673443  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.676254  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.676636  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.676665  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.676869  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.677060  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.677174  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.677269  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.677477  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.677705  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.677729  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:05:31.803218  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:05:31.803255  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:05:31.803279  117024 buildroot.go:174] setting up certificates
	I0826 11:05:31.803293  117024 provision.go:84] configureAuth start
	I0826 11:05:31.803307  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.803594  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:31.806568  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.807033  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.807081  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.807234  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.809692  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.810167  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.810199  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.810447  117024 provision.go:143] copyHostCerts
	I0826 11:05:31.810481  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:05:31.810515  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:05:31.810531  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:05:31.810595  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:05:31.810684  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:05:31.810700  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:05:31.810708  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:05:31.810730  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:05:31.810782  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:05:31.810801  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:05:31.810805  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:05:31.810826  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:05:31.810923  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395-m03 san=[127.0.0.1 192.168.39.209 ha-055395-m03 localhost minikube]
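The server certificate generated above carries the node's IP, hostname, localhost, and 127.0.0.1 as subject alternative names so TLS works for every way the node is addressed. A self-contained Go sketch of issuing a SAN-bearing certificate with crypto/x509 (self-signed here for brevity, whereas the real flow signs with the minikube CA key):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate whose SANs mirror the log entry:
	// IPs 127.0.0.1 and 192.168.39.209, plus hostname-style DNS names.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-055395-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.209")},
		DNSNames:     []string{"ha-055395-m03", "localhost", "minikube"},
	}
	// Self-signed for the sketch; minikube signs with ca.pem / ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}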
	I0826 11:05:32.024003  117024 provision.go:177] copyRemoteCerts
	I0826 11:05:32.024067  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:05:32.024092  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.027083  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.027444  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.027476  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.027719  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.027959  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.028159  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.028298  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.115106  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:05:32.115186  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:05:32.141709  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:05:32.141798  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:05:32.168738  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:05:32.168829  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 11:05:32.195050  117024 provision.go:87] duration metric: took 391.740494ms to configureAuth
	I0826 11:05:32.195084  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:05:32.195329  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:32.195425  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.198753  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.199161  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.199192  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.199445  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.199738  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.199950  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.200106  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.200319  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:32.200499  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:32.200520  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:05:32.477056  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:05:32.477107  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:05:32.477119  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetURL
	I0826 11:05:32.478455  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using libvirt version 6000000
	I0826 11:05:32.480827  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.481167  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.481206  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.481390  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:05:32.481405  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:05:32.481412  117024 client.go:171] duration metric: took 22.584796254s to LocalClient.Create
	I0826 11:05:32.481434  117024 start.go:167] duration metric: took 22.584868827s to libmachine.API.Create "ha-055395"
	I0826 11:05:32.481447  117024 start.go:293] postStartSetup for "ha-055395-m03" (driver="kvm2")
	I0826 11:05:32.481465  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:05:32.481482  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.481717  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:05:32.481750  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.483864  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.484149  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.484173  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.484353  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.484506  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.484696  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.484848  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.574510  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:05:32.578537  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:05:32.578622  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:05:32.578708  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:05:32.578807  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:05:32.578819  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:05:32.578969  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:05:32.588741  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:05:32.613612  117024 start.go:296] duration metric: took 132.146042ms for postStartSetup
	I0826 11:05:32.613670  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:32.614355  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:32.617168  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.617555  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.617599  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.617883  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:32.618131  117024 start.go:128] duration metric: took 22.743325947s to createHost
	I0826 11:05:32.618160  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.620518  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.620827  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.620853  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.621046  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.621303  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.621476  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.621603  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.621759  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:32.622000  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:32.622011  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:05:32.735826  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670332.710972782
	
	I0826 11:05:32.735850  117024 fix.go:216] guest clock: 1724670332.710972782
	I0826 11:05:32.735857  117024 fix.go:229] Guest: 2024-08-26 11:05:32.710972782 +0000 UTC Remote: 2024-08-26 11:05:32.618147148 +0000 UTC m=+142.821673052 (delta=92.825634ms)
	I0826 11:05:32.735876  117024 fix.go:200] guest clock delta is within tolerance: 92.825634ms
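The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock; the guest clock would only be adjusted if the delta exceeded a tolerance. A minimal Go sketch of parsing the guest timestamp and computing the delta (parseGuestClock and the 2s threshold are illustrative, not minikube's exact code or value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724670332.710972782") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // illustrative threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be synced\n", delta)
	}
}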
	I0826 11:05:32.735883  117024 start.go:83] releasing machines lock for "ha-055395-m03", held for 22.861213322s
	I0826 11:05:32.735903  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.736171  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:32.738728  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.739235  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.739265  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.741921  117024 out.go:177] * Found network options:
	I0826 11:05:32.743431  117024 out.go:177]   - NO_PROXY=192.168.39.150,192.168.39.55
	W0826 11:05:32.744862  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	W0826 11:05:32.744896  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:05:32.744918  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.745727  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.746039  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.746178  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:05:32.746228  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	W0826 11:05:32.746279  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	W0826 11:05:32.746304  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:05:32.746379  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:05:32.746404  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.749366  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749396  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749791  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.749839  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.749868  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749924  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.750117  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.750205  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.750307  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.750383  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.750447  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.750501  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.750560  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.750740  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.985275  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:05:32.991074  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:05:32.991147  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:05:33.008497  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:05:33.008543  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:05:33.008624  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:05:33.024905  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:05:33.039390  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:05:33.039463  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:05:33.053838  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:05:33.069329  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:05:33.183597  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:05:33.332337  117024 docker.go:233] disabling docker service ...
	I0826 11:05:33.332404  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:05:33.348908  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:05:33.362319  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:05:33.523528  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:05:33.640144  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:05:33.654456  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:05:33.672799  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:05:33.672862  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.683357  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:05:33.683444  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.693488  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.703741  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.715187  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:05:33.726366  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.736814  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.755067  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.765140  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:05:33.773974  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:05:33.774037  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:05:33.788271  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:05:33.798628  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:05:33.916852  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
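The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pinning the pause image, forcing the cgroupfs cgroup manager, adjusting conmon_cgroup and the unprivileged-port sysctl) and then restarts crio. A hypothetical Go sketch of the same style of line-oriented rewrite, applied to an in-memory copy of the config rather than via sed:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same kind of substitutions the log performs with sed:
// pin the pause image and force the cgroupfs cgroup manager.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
	// After the real edits, the log shows `sudo systemctl daemon-reload`
	// followed by `sudo systemctl restart crio` to apply them.
}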
	I0826 11:05:34.055809  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:05:34.055894  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:05:34.060534  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:05:34.060630  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:05:34.065113  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:05:34.112089  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:05:34.112197  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:05:34.141440  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:05:34.172725  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:05:34.174111  117024 out.go:177]   - env NO_PROXY=192.168.39.150
	I0826 11:05:34.175759  117024 out.go:177]   - env NO_PROXY=192.168.39.150,192.168.39.55
	I0826 11:05:34.177146  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:34.180269  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:34.180633  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:34.180659  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:34.180902  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:05:34.185305  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:05:34.199422  117024 mustload.go:65] Loading cluster: ha-055395
	I0826 11:05:34.199654  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:34.199969  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:34.200013  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:34.215420  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I0826 11:05:34.215882  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:34.216354  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:34.216373  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:34.216745  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:34.216992  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:05:34.218814  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:05:34.219196  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:34.219237  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:34.235151  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0826 11:05:34.235583  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:34.236080  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:34.236106  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:34.236513  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:34.236720  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:05:34.236886  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.209
	I0826 11:05:34.236897  117024 certs.go:194] generating shared ca certs ...
	I0826 11:05:34.236912  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.237039  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:05:34.237074  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:05:34.237082  117024 certs.go:256] generating profile certs ...
	I0826 11:05:34.237147  117024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:05:34.237169  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6
	I0826 11:05:34.237187  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.209 192.168.39.254]
	I0826 11:05:34.313323  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 ...
	I0826 11:05:34.313359  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6: {Name:mk2be64c493d0f3fd7053f7cbe68fe5aba7b8425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.313533  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6 ...
	I0826 11:05:34.313546  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6: {Name:mkfe2613899429ae81d12c212dcf29a172aaaeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.313619  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:05:34.313750  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
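
Note: the lines above regenerate the profile apiserver certificate with SANs for the service IP, localhost, and every control-plane IP plus the VIP 192.168.39.254, write it under a hashed suffix, then copy it into place. The sketch below shows how such a serving cert can be issued from an existing CA with Go's crypto/x509; the file names, CommonName, key size, and validity period are illustrative assumptions, and minikube's actual crypto.go differs in detail.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load an existing CA; assumes a PKCS#1 "RSA PRIVATE KEY" PEM, and paths are illustrative.
	caPEM, err := os.ReadFile("ca.crt")
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// IP SANs copied from the log: service IP, localhost, control-plane IPs, and the VIP.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.150", "192.168.39.55", "192.168.39.209", "192.168.39.254"} {
		ips = append(ips, net.ParseIP(s))
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	check(err)
	check(os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	check(os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600))
}
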
	I0826 11:05:34.313877  117024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:05:34.313893  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:05:34.313906  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:05:34.313919  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:05:34.313932  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:05:34.313944  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:05:34.313955  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:05:34.313967  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:05:34.313978  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:05:34.314030  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:05:34.314056  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:05:34.314065  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:05:34.314085  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:05:34.314105  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:05:34.314127  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:05:34.314165  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:05:34.314189  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.314202  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.314214  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.314247  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:05:34.317454  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:34.317952  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:05:34.317989  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:34.318174  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:05:34.318385  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:05:34.318631  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:05:34.318816  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:05:34.391327  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0826 11:05:34.397068  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0826 11:05:34.409713  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0826 11:05:34.414388  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0826 11:05:34.425537  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0826 11:05:34.429552  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0826 11:05:34.440715  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0826 11:05:34.445267  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0826 11:05:34.456636  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0826 11:05:34.461124  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0826 11:05:34.472765  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0826 11:05:34.477157  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0826 11:05:34.488224  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:05:34.513163  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:05:34.537621  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:05:34.563079  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:05:34.587778  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0826 11:05:34.612232  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:05:34.636366  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:05:34.661605  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:05:34.686530  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:05:34.711512  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:05:34.737635  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:05:34.761710  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0826 11:05:34.779591  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0826 11:05:34.797498  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0826 11:05:34.814490  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0826 11:05:34.831393  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0826 11:05:34.848281  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0826 11:05:34.865337  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0826 11:05:34.882381  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:05:34.888002  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:05:34.899074  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.904128  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.904238  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.909727  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:05:34.920094  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:05:34.930409  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.934934  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.934990  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.940830  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:05:34.952681  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:05:34.965758  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.970440  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.970496  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.976185  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
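
Note: the three command groups above install each CA bundle under /usr/share/ca-certificates and create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL's hash-based lookup expects, with the hash taken from openssl x509 -hash -noout. A rough local equivalent in Go, shelling out to openssl, is sketched below; it assumes the openssl binary is on PATH and always relinks instead of first testing for an existing symlink as the logged command does.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash" plus "ln -fs" steps from the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs by removing any existing link first
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/106598.pem",
		"/usr/share/ca-certificates/1065982.pem",
	} {
		if err := linkBySubjectHash(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}
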
	I0826 11:05:34.989290  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:05:34.993982  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:05:34.994063  117024 kubeadm.go:934] updating node {m03 192.168.39.209 8443 v1.31.0 crio true true} ...
	I0826 11:05:34.994152  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:05:34.994177  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:05:34.994222  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:05:35.010372  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:05:35.010476  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
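
Note: the manifest above is what "generating kube-vip config" produces, a static pod that runs kube-vip with leader election and ARP advertisement of the control-plane VIP 192.168.39.254 on eth0, plus load balancing on port 8443; kubelet picks it up from /etc/kubernetes/manifests once it is copied there later in the log. A minimal sketch of rendering such a manifest from a Go text/template follows; the struct fields and the trimmed template body are illustrative, not minikube's kube-vip.go.

package main

import (
	"os"
	"text/template"
)

// vipParams holds the per-cluster values substituted into the manifest (illustrative names).
type vipParams struct {
	VIP       string
	Port      string
	Interface string
	Image     string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	p := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	// In a real flow the output would be written to /etc/kubernetes/manifests/kube-vip.yaml.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
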
	I0826 11:05:35.010556  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:05:35.020648  117024 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0826 11:05:35.020797  117024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0826 11:05:35.031858  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0826 11:05:35.031859  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0826 11:05:35.031897  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:05:35.031896  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0826 11:05:35.031913  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:05:35.031943  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:05:35.031966  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:05:35.031971  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:05:35.041418  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0826 11:05:35.041453  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0826 11:05:35.056955  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:05:35.056980  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0826 11:05:35.057019  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0826 11:05:35.057062  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:05:35.107605  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0826 11:05:35.107665  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
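
Note: the block above checks whether the v1.31.0 kubeadm, kubectl, and kubelet binaries already exist on the new node and, finding none, copies them from the local cache; the "Not caching binary" lines pair each dl.k8s.io URL with its .sha256 checksum file. A hedged sketch of fetching one binary and verifying it against that published digest is shown below; the URLs are the ones from the log, everything else (buffering the whole file in memory, the output path) is a simplification.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url fully into memory; fine for a sketch, a real tool would stream to disk.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0] // checksum file holds the hex digest
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic("checksum mismatch for kubelet")
	}
	if err := os.WriteFile("kubelet", bin, 0755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified:", got)
}
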
	I0826 11:05:35.934172  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0826 11:05:35.944020  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0826 11:05:35.960999  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:05:35.978215  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:05:35.996039  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:05:36.000425  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:05:36.013711  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:05:36.146677  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:05:36.166818  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:05:36.167336  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:36.167392  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:36.184634  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0826 11:05:36.185060  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:36.185590  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:36.185610  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:36.185954  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:36.186174  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:05:36.186335  117024 start.go:317] joinCluster: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:05:36.186467  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0826 11:05:36.186482  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:05:36.189192  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:36.189657  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:05:36.189691  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:36.189895  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:05:36.190073  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:05:36.190274  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:05:36.190439  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:05:36.347817  117024 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:36.347886  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lm9l3u.n05vhvc2b02519dh --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m03 --control-plane --apiserver-advertise-address=192.168.39.209 --apiserver-bind-port=8443"
	I0826 11:05:59.051708  117024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lm9l3u.n05vhvc2b02519dh --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m03 --control-plane --apiserver-advertise-address=192.168.39.209 --apiserver-bind-port=8443": (22.703790459s)
	I0826 11:05:59.051757  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0826 11:05:59.640986  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395-m03 minikube.k8s.io/updated_at=2024_08_26T11_05_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=false
	I0826 11:05:59.765186  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-055395-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0826 11:05:59.896625  117024 start.go:319] duration metric: took 23.710285157s to joinCluster
	I0826 11:05:59.896731  117024 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:59.897065  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:59.898663  117024 out.go:177] * Verifying Kubernetes components...
	I0826 11:05:59.900463  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:06:00.184359  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:06:00.235461  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:06:00.235832  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0826 11:06:00.235932  117024 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0826 11:06:00.236244  117024 node_ready.go:35] waiting up to 6m0s for node "ha-055395-m03" to be "Ready" ...
	I0826 11:06:00.236339  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:00.236351  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:00.236362  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:00.236368  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:00.240278  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:00.736674  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:00.736703  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:00.736714  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:00.736719  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:00.740533  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:01.236867  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:01.236902  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:01.236913  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:01.236926  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:01.240745  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:01.736794  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:01.736818  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:01.736829  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:01.736833  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:01.740681  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:02.237262  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:02.237290  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:02.237298  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:02.237302  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:02.240458  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:02.240927  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:02.737107  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:02.737131  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:02.737140  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:02.737144  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:02.740759  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:03.237128  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:03.237155  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:03.237165  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:03.237169  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:03.240476  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:03.736450  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:03.736499  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:03.736511  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:03.736516  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:03.740617  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:04.237300  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:04.237326  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:04.237333  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:04.237337  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:04.240827  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:04.241583  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:04.737453  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:04.737482  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:04.737495  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:04.737503  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:04.740868  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:05.236500  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:05.236521  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:05.236530  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:05.236536  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:05.239881  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:05.737338  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:05.737363  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:05.737377  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:05.737382  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:05.740764  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:06.237354  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:06.237387  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:06.237401  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:06.237408  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:06.242710  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:06.243468  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:06.736774  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:06.736797  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:06.736806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:06.736817  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:06.741224  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:07.236635  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:07.236671  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:07.236680  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:07.236685  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:07.240380  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:07.737503  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:07.737530  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:07.737539  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:07.737543  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:07.741193  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:08.237059  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:08.237082  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:08.237091  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:08.237095  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:08.240517  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:08.737441  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:08.737471  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:08.737481  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:08.737490  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:08.741670  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:08.742338  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:09.237100  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:09.237123  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:09.237131  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:09.237135  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:09.240486  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:09.737006  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:09.737038  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:09.737048  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:09.737055  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:09.740611  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:10.237065  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:10.237093  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:10.237104  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:10.237112  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:10.239977  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:10.736440  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:10.736464  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:10.736472  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:10.736476  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:10.740034  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:11.236461  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:11.236483  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:11.236492  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:11.236497  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:11.240157  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:11.240690  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:11.737095  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:11.737118  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:11.737126  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:11.737130  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:11.740781  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:12.237547  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:12.237574  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:12.237582  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:12.237586  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:12.241584  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:12.736582  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:12.736612  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:12.736622  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:12.736626  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:12.740044  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:13.236955  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:13.236984  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:13.236993  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:13.236997  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:13.240548  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:13.241222  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:13.736491  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:13.736516  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:13.736525  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:13.736530  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:13.739943  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:14.237178  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:14.237201  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:14.237210  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:14.237214  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:14.241129  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:14.736612  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:14.736642  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:14.736660  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:14.736667  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:14.740125  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:15.237206  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:15.237233  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:15.237245  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:15.237250  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:15.240870  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:15.241455  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:15.737334  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:15.737362  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:15.737370  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:15.737375  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:15.741177  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:16.236987  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:16.237012  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:16.237020  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:16.237024  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:16.240991  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:16.736852  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:16.736880  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:16.736888  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:16.736891  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:16.741118  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:17.236578  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:17.236605  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:17.236613  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:17.236616  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:17.240086  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:17.736956  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:17.736978  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:17.736987  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:17.736991  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:17.740431  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:17.741256  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:18.236564  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.236592  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.236601  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.236605  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.240062  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.240555  117024 node_ready.go:49] node "ha-055395-m03" has status "Ready":"True"
	I0826 11:06:18.240576  117024 node_ready.go:38] duration metric: took 18.004312905s for node "ha-055395-m03" to be "Ready" ...
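
Note: the repeated GETs above are minikube polling /api/v1/nodes/ha-055395-m03 roughly every 500 ms until the node reports the Ready condition, which took about 18 s here. A minimal client-go sketch of the same wait follows; the kubeconfig path is the one from the log, and the interval and timeout simply mirror the 500 ms cadence and 6m0s budget mentioned above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19501-99403/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node every 500ms, up to 6 minutes, until its Ready condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-055395-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-055395-m03 is Ready")
}
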
	I0826 11:06:18.240586  117024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:06:18.240662  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:18.240672  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.240680  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.240685  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.247667  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:06:18.255049  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.255144  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-l9bd4
	I0826 11:06:18.255152  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.255160  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.255163  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.258174  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.258933  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.258956  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.258967  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.258975  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.261839  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.262337  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.262360  117024 pod_ready.go:82] duration metric: took 7.280488ms for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.262374  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.262448  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nxb7s
	I0826 11:06:18.262459  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.262469  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.262475  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.268156  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:18.268916  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.268934  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.268941  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.268946  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.272031  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.272672  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.272696  117024 pod_ready.go:82] duration metric: took 10.313624ms for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.272709  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.272790  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395
	I0826 11:06:18.272802  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.272820  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.272829  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.275976  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.276783  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.276798  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.276806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.276811  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.279604  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.280422  117024 pod_ready.go:93] pod "etcd-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.280458  117024 pod_ready.go:82] duration metric: took 7.740578ms for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.280474  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.280562  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m02
	I0826 11:06:18.280575  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.280588  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.280596  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.283900  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.284722  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:18.284736  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.284743  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.284747  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.287513  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.288091  117024 pod_ready.go:93] pod "etcd-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.288113  117024 pod_ready.go:82] duration metric: took 7.631105ms for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.288123  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.437524  117024 request.go:632] Waited for 149.331839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m03
	I0826 11:06:18.437606  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m03
	I0826 11:06:18.437626  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.437635  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.437640  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.441585  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.636690  117024 request.go:632] Waited for 194.348676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.636773  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.636780  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.636791  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.636801  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.641895  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:18.642471  117024 pod_ready.go:93] pod "etcd-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.642495  117024 pod_ready.go:82] duration metric: took 354.363726ms for pod "etcd-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.642518  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.836640  117024 request.go:632] Waited for 194.005829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:06:18.836727  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:06:18.836734  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.836746  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.836753  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.840987  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:19.037052  117024 request.go:632] Waited for 195.381707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:19.037122  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:19.037128  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.037135  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.037139  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.041035  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.041810  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.041848  117024 pod_ready.go:82] duration metric: took 399.304359ms for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.041862  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.237476  117024 request.go:632] Waited for 195.524757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:06:19.237541  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:06:19.237546  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.237567  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.237571  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.241226  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.436657  117024 request.go:632] Waited for 194.288015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:19.436724  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:19.436729  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.436737  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.436742  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.440727  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.441435  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.441460  117024 pod_ready.go:82] duration metric: took 399.591361ms for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.441478  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.637523  117024 request.go:632] Waited for 195.952664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m03
	I0826 11:06:19.637615  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m03
	I0826 11:06:19.637622  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.637630  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.637635  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.641332  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.836812  117024 request.go:632] Waited for 194.542228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:19.836894  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:19.836899  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.836909  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.836914  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.840756  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.841371  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.841396  117024 pod_ready.go:82] duration metric: took 399.909275ms for pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.841410  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.037378  117024 request.go:632] Waited for 195.879685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:06:20.037449  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:06:20.037455  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.037464  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.037468  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.041198  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.237157  117024 request.go:632] Waited for 195.361607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:20.237226  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:20.237232  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.237239  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.237243  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.240423  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.241263  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:20.241281  117024 pod_ready.go:82] duration metric: took 399.863521ms for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.241291  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.437154  117024 request.go:632] Waited for 195.764082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:06:20.437232  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:06:20.437240  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.437251  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.437257  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.441193  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.637545  117024 request.go:632] Waited for 195.425179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:20.637623  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:20.637629  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.637638  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.637643  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.641398  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.642370  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:20.642390  117024 pod_ready.go:82] duration metric: took 401.093186ms for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.642400  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.837552  117024 request.go:632] Waited for 195.047341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m03
	I0826 11:06:20.837636  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m03
	I0826 11:06:20.837644  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.837656  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.837669  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.841552  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.036616  117024 request.go:632] Waited for 194.305096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.036711  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.036716  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.036725  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.036730  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.040195  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.040698  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.040719  117024 pod_ready.go:82] duration metric: took 398.313858ms for pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.040730  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52vmd" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.237315  117024 request.go:632] Waited for 196.499841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52vmd
	I0826 11:06:21.237377  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52vmd
	I0826 11:06:21.237384  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.237395  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.237400  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.240742  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.436784  117024 request.go:632] Waited for 195.332846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.436868  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.436875  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.436886  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.436892  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.440708  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.441274  117024 pod_ready.go:93] pod "kube-proxy-52vmd" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.441294  117024 pod_ready.go:82] duration metric: took 400.557073ms for pod "kube-proxy-52vmd" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.441308  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.637557  117024 request.go:632] Waited for 196.170343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:06:21.637645  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:06:21.637651  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.637658  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.637661  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.642756  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:21.836985  117024 request.go:632] Waited for 193.407328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:21.837058  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:21.837066  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.837076  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.837085  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.840577  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.841247  117024 pod_ready.go:93] pod "kube-proxy-g45pb" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.841269  117024 pod_ready.go:82] duration metric: took 399.95227ms for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.841279  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.036701  117024 request.go:632] Waited for 195.350804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:06:22.036785  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:06:22.036790  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.036806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.036824  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.040409  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.237617  117024 request.go:632] Waited for 196.424222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:22.237699  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:22.237706  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.237717  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.237722  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.241336  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.242092  117024 pod_ready.go:93] pod "kube-proxy-zl5bm" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:22.242112  117024 pod_ready.go:82] duration metric: took 400.82761ms for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.242122  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.437229  117024 request.go:632] Waited for 195.016866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:06:22.437295  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:06:22.437300  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.437308  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.437312  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.441030  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.637598  117024 request.go:632] Waited for 195.482711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:22.637676  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:22.637682  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.637689  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.637694  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.641467  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.642037  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:22.642054  117024 pod_ready.go:82] duration metric: took 399.926666ms for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.642064  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.837339  117024 request.go:632] Waited for 195.191726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:06:22.837410  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:06:22.837415  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.837422  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.837427  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.841838  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:23.036722  117024 request.go:632] Waited for 194.282073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:23.036805  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:23.036811  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.036818  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.036826  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.040709  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.041522  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:23.041543  117024 pod_ready.go:82] duration metric: took 399.471152ms for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.041559  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.237674  117024 request.go:632] Waited for 196.018809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m03
	I0826 11:06:23.237752  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m03
	I0826 11:06:23.237758  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.237766  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.237770  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.241372  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.437409  117024 request.go:632] Waited for 195.395835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:23.437486  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:23.437492  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.437506  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.437517  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.440863  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.441579  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:23.441604  117024 pod_ready.go:82] duration metric: took 400.03879ms for pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.441617  117024 pod_ready.go:39] duration metric: took 5.201013746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
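The readiness loop above issues one GET per pod and one per node, inspecting each pod's Ready condition; the repeated "client-side throttling" waits come from client-go's default rate limiter. A minimal sketch of the same check using client-go is shown below; the kubeconfig path is an assumed placeholder, and the pod name is taken from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; illustrative only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst reduces the "client-side throttling" waits seen in the log.
		cfg.QPS = 50
		cfg.Burst = 100

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-055395", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}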
	I0826 11:06:23.441633  117024 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:06:23.441700  117024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:06:23.457907  117024 api_server.go:72] duration metric: took 23.561130355s to wait for apiserver process to appear ...
	I0826 11:06:23.457939  117024 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:06:23.457966  117024 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0826 11:06:23.462864  117024 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0826 11:06:23.462936  117024 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0826 11:06:23.462944  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.462952  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.462959  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.463914  117024 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0826 11:06:23.463974  117024 api_server.go:141] control plane version: v1.31.0
	I0826 11:06:23.463988  117024 api_server.go:131] duration metric: took 6.042713ms to wait for apiserver health ...
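After confirming the kube-apiserver process with pgrep, the tool probes /healthz (expecting a 200 response with the literal body "ok") and then /version. A rough stand-in using only the standard library is sketched below; the bearer token is a placeholder and certificate verification is disabled purely for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Illustrative only: skip TLS verification and use a placeholder token.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		req, _ := http.NewRequest("GET", "https://192.168.39.150:8443/healthz", nil)
		req.Header.Set("Authorization", "Bearer <token>")

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok".
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}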
	I0826 11:06:23.463996  117024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:06:23.637440  117024 request.go:632] Waited for 173.339398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:23.637509  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:23.637515  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.637522  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.637526  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.644026  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:06:23.650289  117024 system_pods.go:59] 24 kube-system pods found
	I0826 11:06:23.650323  117024 system_pods.go:61] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:06:23.650328  117024 system_pods.go:61] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:06:23.650332  117024 system_pods.go:61] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:06:23.650335  117024 system_pods.go:61] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:06:23.650338  117024 system_pods.go:61] "etcd-ha-055395-m03" [58ac0f4b-05b2-4304-9a5a-442c4ece6271] Running
	I0826 11:06:23.650341  117024 system_pods.go:61] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:06:23.650344  117024 system_pods.go:61] "kindnet-wnz4m" [a1409b32-1fad-47e2-8c6e-97e2d0350e72] Running
	I0826 11:06:23.650347  117024 system_pods.go:61] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:06:23.650350  117024 system_pods.go:61] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:06:23.650353  117024 system_pods.go:61] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:06:23.650355  117024 system_pods.go:61] "kube-apiserver-ha-055395-m03" [4499f800-70e2-4864-8871-0f9cd30331b6] Running
	I0826 11:06:23.650358  117024 system_pods.go:61] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:06:23.650362  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:06:23.650364  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m03" [0e15ae3e-1330-4624-9c7d-019886111312] Running
	I0826 11:06:23.650367  117024 system_pods.go:61] "kube-proxy-52vmd" [3c3c5e99-eaf5-41ef-a319-de13b16b4936] Running
	I0826 11:06:23.650370  117024 system_pods.go:61] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:06:23.650373  117024 system_pods.go:61] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:06:23.650375  117024 system_pods.go:61] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:06:23.650378  117024 system_pods.go:61] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:06:23.650381  117024 system_pods.go:61] "kube-scheduler-ha-055395-m03" [c63e9b31-fade-466b-87a4-661fba5e0e61] Running
	I0826 11:06:23.650383  117024 system_pods.go:61] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:06:23.650386  117024 system_pods.go:61] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:06:23.650388  117024 system_pods.go:61] "kube-vip-ha-055395-m03" [7dc9fbef-3a6f-4570-8e14-e3bbe1e7cab7] Running
	I0826 11:06:23.650392  117024 system_pods.go:61] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:06:23.650398  117024 system_pods.go:74] duration metric: took 186.396638ms to wait for pod list to return data ...
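The pod inventory above comes from a single list of the kube-system namespace, with each entry then checked for the Running phase. A compact client-go sketch of that listing follows; the kubeconfig path is again an assumed placeholder.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// The test expects every system pod to report phase Running.
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
			}
		}
	}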
	I0826 11:06:23.650406  117024 default_sa.go:34] waiting for default service account to be created ...
	I0826 11:06:23.836798  117024 request.go:632] Waited for 186.304297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:06:23.836874  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:06:23.836880  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.836887  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.836892  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.841344  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:23.841466  117024 default_sa.go:45] found service account: "default"
	I0826 11:06:23.841479  117024 default_sa.go:55] duration metric: took 191.067398ms for default service account to be created ...
	I0826 11:06:23.841488  117024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 11:06:24.036961  117024 request.go:632] Waited for 195.394858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:24.037050  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:24.037058  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:24.037067  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:24.037073  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:24.042393  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:24.049055  117024 system_pods.go:86] 24 kube-system pods found
	I0826 11:06:24.049087  117024 system_pods.go:89] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:06:24.049096  117024 system_pods.go:89] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:06:24.049102  117024 system_pods.go:89] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:06:24.049108  117024 system_pods.go:89] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:06:24.049114  117024 system_pods.go:89] "etcd-ha-055395-m03" [58ac0f4b-05b2-4304-9a5a-442c4ece6271] Running
	I0826 11:06:24.049119  117024 system_pods.go:89] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:06:24.049124  117024 system_pods.go:89] "kindnet-wnz4m" [a1409b32-1fad-47e2-8c6e-97e2d0350e72] Running
	I0826 11:06:24.049129  117024 system_pods.go:89] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:06:24.049134  117024 system_pods.go:89] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:06:24.049139  117024 system_pods.go:89] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:06:24.049146  117024 system_pods.go:89] "kube-apiserver-ha-055395-m03" [4499f800-70e2-4864-8871-0f9cd30331b6] Running
	I0826 11:06:24.049151  117024 system_pods.go:89] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:06:24.049158  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:06:24.049166  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m03" [0e15ae3e-1330-4624-9c7d-019886111312] Running
	I0826 11:06:24.049175  117024 system_pods.go:89] "kube-proxy-52vmd" [3c3c5e99-eaf5-41ef-a319-de13b16b4936] Running
	I0826 11:06:24.049182  117024 system_pods.go:89] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:06:24.049189  117024 system_pods.go:89] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:06:24.049194  117024 system_pods.go:89] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:06:24.049200  117024 system_pods.go:89] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:06:24.049208  117024 system_pods.go:89] "kube-scheduler-ha-055395-m03" [c63e9b31-fade-466b-87a4-661fba5e0e61] Running
	I0826 11:06:24.049216  117024 system_pods.go:89] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:06:24.049224  117024 system_pods.go:89] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:06:24.049230  117024 system_pods.go:89] "kube-vip-ha-055395-m03" [7dc9fbef-3a6f-4570-8e14-e3bbe1e7cab7] Running
	I0826 11:06:24.049235  117024 system_pods.go:89] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:06:24.049245  117024 system_pods.go:126] duration metric: took 207.750065ms to wait for k8s-apps to be running ...
	I0826 11:06:24.049259  117024 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 11:06:24.049317  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:06:24.065236  117024 system_svc.go:56] duration metric: took 15.963207ms WaitForService to wait for kubelet
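The kubelet check logged above is simply "sudo systemctl is-active --quiet service kubelet" run over SSH, where a zero exit status means the unit is active. A local stand-in using os/exec (no SSH hop; systemctl assumed to be on PATH) could look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged check; run locally here for illustration.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			// A non-zero exit (or a missing binary) surfaces here.
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}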
	I0826 11:06:24.065277  117024 kubeadm.go:582] duration metric: took 24.168505094s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:06:24.065323  117024 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:06:24.237107  117024 request.go:632] Waited for 171.674022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0826 11:06:24.237166  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0826 11:06:24.237171  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:24.237178  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:24.237183  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:24.241375  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:24.242231  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242251  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242262  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242265  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242269  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242272  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242276  117024 node_conditions.go:105] duration metric: took 176.947306ms to run NodePressure ...
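The NodePressure step lists every node and reads ephemeral-storage and CPU capacity from status.capacity, which is what produces the three capacity pairs above. A client-go sketch of that read (kubeconfig path assumed) might be:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}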
	I0826 11:06:24.242287  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:06:24.242309  117024 start.go:255] writing updated cluster config ...
	I0826 11:06:24.242597  117024 ssh_runner.go:195] Run: rm -f paused
	I0826 11:06:24.297402  117024 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 11:06:24.299546  117024 out.go:177] * Done! kubectl is now configured to use "ha-055395" cluster and "default" namespace by default
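The remainder of the report is the CRI-O debug log captured on the node; the ImageFsInfo and ListContainers entries below are the runtime answering the same CRI RPCs that crictl issues. To reproduce a similar container listing by hand, one could run crictl on the node (crictl and sudo access assumed), as in this small sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumes crictl is installed and pointed at the default CRI-O socket.
		out, err := exec.Command("sudo", "crictl", "ps").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}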
	
	
	==> CRI-O <==
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.339994286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670605339969674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87f9a941-52ee-4a94-8b33-ac0455490368 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.340432323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0cde16d-560d-459f-8332-3157c4e36677 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.340482771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0cde16d-560d-459f-8332-3157c4e36677 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.340694708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0cde16d-560d-459f-8332-3157c4e36677 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.382062446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4640cdd8-2ea4-4f59-9bb6-1143d0561649 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.382162951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4640cdd8-2ea4-4f59-9bb6-1143d0561649 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.383955431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=424f787a-6279-4ce9-b61a-8f11bfe6cba5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.384527025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670605384500106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=424f787a-6279-4ce9-b61a-8f11bfe6cba5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.385354834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ee4da55-d44e-4601-929d-9a374c5f7779 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.385436345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ee4da55-d44e-4601-929d-9a374c5f7779 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.385798524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ee4da55-d44e-4601-929d-9a374c5f7779 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.422563387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6eb0188-c394-4094-b394-5fe62dc0c6a1 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.422645350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6eb0188-c394-4094-b394-5fe62dc0c6a1 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.423686025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fddafb2c-c2c8-414b-af82-e0a6067f00ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.424157189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670605424135947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fddafb2c-c2c8-414b-af82-e0a6067f00ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.424780141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4acae6b-7df9-4d46-93af-ba74514b6c44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.424838562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4acae6b-7df9-4d46-93af-ba74514b6c44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.425070215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4acae6b-7df9-4d46-93af-ba74514b6c44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.466510242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f909de57-8f02-4830-b392-478b00617bd0 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.466589883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f909de57-8f02-4830-b392-478b00617bd0 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.467951825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30e820ac-86d6-4168-bdb8-a7f0f6ea4020 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.468672621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670605468618141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30e820ac-86d6-4168-bdb8-a7f0f6ea4020 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.469498122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=093917b9-4123-445b-a5b5-0a62526edab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.469569448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=093917b9-4123-445b-a5b5-0a62526edab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:10:05 ha-055395 crio[676]: time="2024-08-26 11:10:05.469878224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=093917b9-4123-445b-a5b5-0a62526edab5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2f106e1bd830       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a356e619a2186       busybox-7dff88458-xh6vw
	588201165ca01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   73e7528d83ce5       coredns-6f6b679f8f-nxb7s
	9fdad1c79bb41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3593c2f74b608       coredns-6f6b679f8f-l9bd4
	307243c699fa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   21c0385083f38       storage-provisioner
	d5ffe25b55c8a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   3f092331272f7       kindnet-z2rh2
	4518376ec7b4a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   dd6c20478efce       kube-proxy-g45pb
	d4490a4c3fa0b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   862ecb4417c55       kube-vip-ha-055395
	9f71e1964ec11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   d03f237462672       kube-scheduler-ha-055395
	9500eb08ad452       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   40a84124456a3       etcd-ha-055395
	bcd57c7d0ba05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   bac675258d360       kube-controller-manager-ha-055395
	37bbfc44887fa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   3eb49d746b20e       kube-apiserver-ha-055395
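	
	The table above is the human-readable form of the ListContainers responses in the crio debug log earlier in this capture. As a minimal sketch (assuming the default CRI-O socket shown in the node annotations, unix:///var/run/crio/crio.sock, and the ha-055395 profile used throughout these logs), the same listing could be pulled directly from the node with:
	
	    out/minikube-linux-amd64 -p ha-055395 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"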
	
	
	==> coredns [588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9] <==
	[INFO] 10.244.1.2:59222 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002106896s
	[INFO] 10.244.1.2:42031 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136805s
	[INFO] 10.244.1.2:48240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195092s
	[INFO] 10.244.1.2:39354 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428637s
	[INFO] 10.244.1.2:38981 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143058s
	[INFO] 10.244.1.2:42169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00025738s
	[INFO] 10.244.0.4:39980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128242s
	[INFO] 10.244.0.4:57380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001955064s
	[INFO] 10.244.0.4:60257 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001538811s
	[INFO] 10.244.0.4:60079 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000036286s
	[INFO] 10.244.0.4:50624 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103476s
	[INFO] 10.244.0.4:46611 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034392s
	[INFO] 10.244.3.2:52234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158227s
	[INFO] 10.244.3.2:51370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133305s
	[INFO] 10.244.3.2:40430 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145691s
	[INFO] 10.244.3.2:50269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000221739s
	[INFO] 10.244.1.2:49573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010888s
	[INFO] 10.244.0.4:49284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199959s
	[INFO] 10.244.3.2:38694 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.3.2:55559 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116423s
	[INFO] 10.244.1.2:38712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274813s
	[INFO] 10.244.1.2:38536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091302s
	[INFO] 10.244.0.4:35805 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089054s
	[INFO] 10.244.0.4:53560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109072s
	[INFO] 10.244.0.4:50886 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061358s
	
	
	==> coredns [9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e] <==
	[INFO] 127.0.0.1:33483 - 35199 "HINFO IN 6318060826605411215.8532303163548737398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011519809s
	[INFO] 10.244.3.2:42757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011759911s
	[INFO] 10.244.0.4:48529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225855s
	[INFO] 10.244.0.4:39187 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001675593s
	[INFO] 10.244.0.4:36731 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000357451s
	[INFO] 10.244.0.4:57644 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001630694s
	[INFO] 10.244.3.2:35262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118796s
	[INFO] 10.244.3.2:56831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004452512s
	[INFO] 10.244.3.2:50141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195651s
	[INFO] 10.244.3.2:52724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157926s
	[INFO] 10.244.3.2:48168 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135307s
	[INFO] 10.244.1.2:49021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099106s
	[INFO] 10.244.0.4:33653 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173931s
	[INFO] 10.244.0.4:49095 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089973s
	[INFO] 10.244.1.2:60072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132366s
	[INFO] 10.244.1.2:45712 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081817s
	[INFO] 10.244.1.2:47110 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082159s
	[INFO] 10.244.0.4:48619 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100933s
	[INFO] 10.244.0.4:37358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069397s
	[INFO] 10.244.0.4:46981 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092796s
	[INFO] 10.244.3.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240921s
	[INFO] 10.244.3.2:44319 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002198s
	[INFO] 10.244.1.2:48438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216864s
	[INFO] 10.244.1.2:45176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133331s
	[INFO] 10.244.0.4:41108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112163s
	
	
	==> describe nodes <==
	Name:               ha-055395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:09:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:04:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-055395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 68841a7ef08f47a386553bd433710191
	  System UUID:                68841a7e-f08f-47a3-8655-3bd433710191
	  Boot ID:                    be93c222-ff08-41d5-baae-cb87ba3b44cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xh6vw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-6f6b679f8f-l9bd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-6f6b679f8f-nxb7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-055395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-z2rh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-055395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-055395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-g45pb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-055395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-055395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-055395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-055395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-055395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal  NodeReady                5m54s  kubelet          Node ha-055395 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal  RegisteredNode           4m1s   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	
	
	Name:               ha-055395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:04:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:07:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-055395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9151de0e0e3545e983307f4ed75379a4
	  System UUID:                9151de0e-0e35-45e9-8330-7f4ed75379a4
	  Boot ID:                    4303fdb0-210c-4d93-9956-aae5fab451d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbwm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-055395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-js2cb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-055395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-055395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-zl5bm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-055395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-055395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m20s                  cidrAllocator    Node ha-055395-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-055395-m02 status is now: NodeNotReady
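	
	The Unknown conditions and unreachable taints on ha-055395-m02 are consistent with the secondary node having stopped posting status. A quick way to confirm the live view, assuming the kubectl context is named after the minikube profile as elsewhere in this report, would be:
	
	    kubectl --context ha-055395 get nodes -o wide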
	
	
	Name:               ha-055395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_05_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:05:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:10:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:06:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-055395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85be43a1fb394f4ea22aa7e3674c88fc
	  System UUID:                85be43a1-fb39-4f4e-a22a-a7e3674c88fc
	  Boot ID:                    f1c6fea4-515c-4231-b6c3-f318551247cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8cc92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-055395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-wnz4m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-055395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-055395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-52vmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-055395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-055395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m9s                 cidrAllocator    Node ha-055395-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-055395-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	
	
	Name:               ha-055395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:07:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:09:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-055395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fad8927c4194cf6a2bc5a5e286dfbd0
	  System UUID:                0fad8927-c419-4cf6-a2bc-5a5e286dfbd0
	  Boot ID:                    be3015bb-1b6c-4cf5-9b0d-dc467942896c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4gpg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-758wf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m2s                 cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  CIDRAssignmentFailed     3m2s                 cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-055395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug26 11:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050670] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038233] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.925041] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.551281] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.796386] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.063641] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061452] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165458] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.147926] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.278562] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.051395] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.884243] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.058746] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.395019] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.102683] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.458120] kauditd_printk_skb: 21 callbacks suppressed
	[Aug26 11:04] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.777933] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5] <==
	{"level":"warn","ts":"2024-08-26T11:10:05.751243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.755928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.766095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.773529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.779300Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.785901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.790396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.793482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.799817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.807164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.813406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.817031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.820860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.828978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.834946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.839673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.841832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.845165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.847983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.852242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.858358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.865160Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.873116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:10:05.901295Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"beae49225677c4e6","rtt":"8.818377ms","error":"dial tcp 192.168.39.55:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-26T11:10:05.901383Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"beae49225677c4e6","rtt":"979.73µs","error":"dial tcp 192.168.39.55:2380: i/o timeout"}
	
	
	==> kernel <==
	 11:10:05 up 6 min,  0 users,  load average: 0.23, 0.28, 0.12
	Linux ha-055395 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8] <==
	I0826 11:09:31.533422       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:09:41.525897       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:09:41.526032       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:09:41.526290       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:09:41.526337       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:09:41.526451       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:09:41.526481       1 main.go:299] handling current node
	I0826 11:09:41.526524       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:09:41.526549       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:09:51.532385       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:09:51.532619       1 main.go:299] handling current node
	I0826 11:09:51.532690       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:09:51.532712       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:09:51.532996       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:09:51.533018       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:09:51.533099       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:09:51.533123       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:10:01.524117       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:10:01.524196       1 main.go:299] handling current node
	I0826 11:10:01.524225       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:10:01.524234       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:10:01.524384       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:10:01.524404       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:10:01.524468       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:10:01.524486       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5] <==
	I0826 11:03:50.254958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 11:03:51.114973       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 11:03:51.138659       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0826 11:03:51.258113       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 11:03:55.201433       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0826 11:03:55.964408       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0826 11:05:57.302336       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 18.506µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0826 11:05:57.302422       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="380d2234-45a2-4699-acab-203701593ddb"
	E0826 11:05:57.302490       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.295µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0826 11:06:29.448626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57834: use of closed network connection
	E0826 11:06:29.636511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57848: use of closed network connection
	E0826 11:06:29.834954       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57880: use of closed network connection
	E0826 11:06:30.035184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57902: use of closed network connection
	E0826 11:06:30.229312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57916: use of closed network connection
	E0826 11:06:30.423379       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57936: use of closed network connection
	E0826 11:06:30.617088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57964: use of closed network connection
	E0826 11:06:30.809409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34382: use of closed network connection
	E0826 11:06:30.994954       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34392: use of closed network connection
	E0826 11:06:31.301280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34408: use of closed network connection
	E0826 11:06:31.477594       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34426: use of closed network connection
	E0826 11:06:31.666522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34446: use of closed network connection
	E0826 11:06:31.844190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34464: use of closed network connection
	E0826 11:06:32.044390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34486: use of closed network connection
	E0826 11:06:32.234076       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34508: use of closed network connection
	W0826 11:08:00.066257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.209]
	
	
	==> kube-controller-manager [bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b] <==
	E0826 11:07:03.663422       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-055395-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-055395-m04"
	E0826 11:07:03.663538       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-055395-m04': failed to patch node CIDR: Node \"ha-055395-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0826 11:07:03.663685       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.669413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.746900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.797582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.180240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.890414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.952434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:05.067583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:05.068067       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-055395-m04"
	I0826 11:07:05.157653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:13.770341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.076215       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:07:24.077165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.093855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.911620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:34.105500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:08:19.937723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:19.938491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:08:19.968522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:20.124907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.7478ms"
	I0826 11:08:20.125223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.072µs"
	I0826 11:08:20.151714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:25.192443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	
	
	==> kube-proxy [4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:03:56.913791       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:03:56.925813       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0826 11:03:56.926041       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:03:56.969129       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:03:56.969172       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:03:56.969202       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:03:56.971710       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:03:56.972143       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:03:56.972290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:03:56.973983       1 config.go:197] "Starting service config controller"
	I0826 11:03:56.974108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:03:56.974188       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:03:56.974206       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:03:56.974826       1 config.go:326] "Starting node config controller"
	I0826 11:03:56.976097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:03:57.075115       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:03:57.075149       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:03:57.076414       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3] <==
	I0826 11:05:56.667408       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mnslg" node="ha-055395-m03"
	E0826 11:06:25.317284       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xh6vw\": pod busybox-7dff88458-xh6vw is already assigned to node \"ha-055395\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xh6vw" node="ha-055395"
	E0826 11:06:25.317379       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 94adba85-441f-40d9-bcf2-616b1bd587dc(default/busybox-7dff88458-xh6vw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xh6vw"
	E0826 11:06:25.317401       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xh6vw\": pod busybox-7dff88458-xh6vw is already assigned to node \"ha-055395\"" pod="default/busybox-7dff88458-xh6vw"
	I0826 11:06:25.317473       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xh6vw" node="ha-055395"
	E0826 11:07:03.627443       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-758wf\": pod kube-proxy-758wf is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-758wf" node="ha-055395-m04"
	E0826 11:07:03.627698       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-758wf\": pod kube-proxy-758wf is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-758wf"
	E0826 11:07:03.630860       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n4gpg\": pod kindnet-n4gpg is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n4gpg" node="ha-055395-m04"
	E0826 11:07:03.630950       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n4gpg\": pod kindnet-n4gpg is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-n4gpg"
	E0826 11:07:03.708033       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xdd9l\": pod kindnet-xdd9l is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xdd9l" node="ha-055395-m04"
	E0826 11:07:03.708220       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-c8476\": pod kube-proxy-c8476 is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-c8476" node="ha-055395-m04"
	E0826 11:07:03.708278       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21e322d3-c564-4ec6-b66b-e86860280682(kube-system/kube-proxy-c8476) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-c8476"
	E0826 11:07:03.708304       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-c8476\": pod kube-proxy-c8476 is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-c8476"
	I0826 11:07:03.708325       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-c8476" node="ha-055395-m04"
	E0826 11:07:03.708436       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a8f1119a-bc1c-46d9-91fd-76553c71f1ff(kube-system/kindnet-xdd9l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-xdd9l"
	E0826 11:07:03.708516       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xdd9l\": pod kindnet-xdd9l is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-xdd9l"
	I0826 11:07:03.708579       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xdd9l" node="ha-055395-m04"
	E0826 11:07:03.708838       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.708887       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2ef2b044-3278-43d7-8164-a8b51d7f9424(kube-system/kube-proxy-kkwxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kkwxm"
	E0826 11:07:03.708901       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-kkwxm"
	I0826 11:07:03.708919       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.709603       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:07:03.711019       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45edff34-de36-493a-9dba-b74e8a326787(kube-system/kindnet-ww4xl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ww4xl"
	E0826 11:07:03.711136       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-ww4xl"
	I0826 11:07:03.711360       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	
	
	==> kubelet <==
	Aug 26 11:08:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:08:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:08:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:08:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:08:51 ha-055395 kubelet[1329]: E0826 11:08:51.388061    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670531387562600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:08:51 ha-055395 kubelet[1329]: E0826 11:08:51.388118    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670531387562600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:01 ha-055395 kubelet[1329]: E0826 11:09:01.390849    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670541389818812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:01 ha-055395 kubelet[1329]: E0826 11:09:01.390954    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670541389818812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:11 ha-055395 kubelet[1329]: E0826 11:09:11.393831    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670551393287084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:11 ha-055395 kubelet[1329]: E0826 11:09:11.393915    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670551393287084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:21 ha-055395 kubelet[1329]: E0826 11:09:21.395315    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670561395051641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:21 ha-055395 kubelet[1329]: E0826 11:09:21.395351    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670561395051641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:31 ha-055395 kubelet[1329]: E0826 11:09:31.397417    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670571396732023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:31 ha-055395 kubelet[1329]: E0826 11:09:31.397441    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670571396732023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:41 ha-055395 kubelet[1329]: E0826 11:09:41.399596    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670581399032309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:41 ha-055395 kubelet[1329]: E0826 11:09:41.399636    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670581399032309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:51 ha-055395 kubelet[1329]: E0826 11:09:51.281017    1329 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:09:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:09:51 ha-055395 kubelet[1329]: E0826 11:09:51.401819    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670591401413183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:51 ha-055395 kubelet[1329]: E0826 11:09:51.401859    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670591401413183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:01 ha-055395 kubelet[1329]: E0826 11:10:01.404157    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670601403727960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:01 ha-055395 kubelet[1329]: E0826 11:10:01.404209    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670601403727960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-055395 -n ha-055395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-055395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (3.201130885s)

-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0826 11:10:10.500074  121787 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:10.500327  121787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:10.500348  121787 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:10.500358  121787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:10.500544  121787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:10.500729  121787 out.go:352] Setting JSON to false
	I0826 11:10:10.500763  121787 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:10.500877  121787 notify.go:220] Checking for updates...
	I0826 11:10:10.501244  121787 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:10.501265  121787 status.go:255] checking status of ha-055395 ...
	I0826 11:10:10.501760  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.501838  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.518419  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0826 11:10:10.519076  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.519744  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.519770  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.520144  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.520344  121787 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:10.522036  121787 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:10.522058  121787 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:10.522501  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.522552  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.539019  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0826 11:10:10.539495  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.539996  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.540017  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.540401  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.540606  121787 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:10.543698  121787 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:10.544102  121787 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:10.544135  121787 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:10.544287  121787 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:10.544797  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.544882  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.561077  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0826 11:10:10.561540  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.562137  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.562159  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.562520  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.562744  121787 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:10.562972  121787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:10.563002  121787 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:10.566517  121787 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:10.566994  121787 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:10.567016  121787 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:10.567338  121787 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:10.567540  121787 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:10.567731  121787 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:10.567889  121787 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:10.647973  121787 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:10.655744  121787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:10.674547  121787 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:10.674584  121787 api_server.go:166] Checking apiserver status ...
	I0826 11:10:10.674626  121787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:10.691037  121787 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:10.702528  121787 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:10.702586  121787 ssh_runner.go:195] Run: ls
	I0826 11:10:10.708765  121787 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:10.716132  121787 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:10.716167  121787 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:10.716180  121787 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:10.716205  121787 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:10.716658  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.716711  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.732052  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0826 11:10:10.732470  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.732927  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.732950  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.733289  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.733507  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:10.735182  121787 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:10.735200  121787 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:10.735468  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.735503  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.751101  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0826 11:10:10.751514  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.752017  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.752042  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.752338  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.752579  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:10.755775  121787 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:10.756195  121787 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:10.756225  121787 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:10.756366  121787 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:10.756698  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:10.756744  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:10.773315  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39669
	I0826 11:10:10.773791  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:10.774235  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:10.774261  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:10.774587  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:10.774943  121787 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:10.775202  121787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:10.775227  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:10.778140  121787 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:10.778608  121787 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:10.778637  121787 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:10.778849  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:10.779011  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:10.779129  121787 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:10.779241  121787 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:13.295226  121787 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:13.295316  121787 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:13.295332  121787 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:13.295338  121787 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:13.295356  121787 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:13.295363  121787 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:13.295680  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.295708  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.311567  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I0826 11:10:13.312039  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.312485  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.312510  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.312841  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.313067  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:13.314825  121787 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:13.314857  121787 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:13.315262  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.315297  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.331147  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0826 11:10:13.331591  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.332093  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.332123  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.332439  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.332662  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:13.335579  121787 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:13.336007  121787 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:13.336035  121787 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:13.336144  121787 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:13.336479  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.336523  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.352399  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41311
	I0826 11:10:13.352904  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.353371  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.353403  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.353842  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.354029  121787 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:13.354241  121787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:13.354266  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:13.357287  121787 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:13.357783  121787 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:13.357818  121787 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:13.358018  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:13.358217  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:13.358390  121787 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:13.358529  121787 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:13.441982  121787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:13.457470  121787 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:13.457511  121787 api_server.go:166] Checking apiserver status ...
	I0826 11:10:13.457561  121787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:13.470953  121787 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:13.480926  121787 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:13.480984  121787 ssh_runner.go:195] Run: ls
	I0826 11:10:13.485096  121787 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:13.492273  121787 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:13.492316  121787 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:13.492326  121787 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:13.492343  121787 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:13.492686  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.492716  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.508176  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I0826 11:10:13.508618  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.509163  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.509190  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.509483  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.509675  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:13.511362  121787 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:13.511379  121787 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:13.511692  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.511720  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.527059  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I0826 11:10:13.527542  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.528066  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.528086  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.528430  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.528645  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:13.531305  121787 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:13.531731  121787 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:13.531757  121787 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:13.531931  121787 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:13.532218  121787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:13.532241  121787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:13.547758  121787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0826 11:10:13.548173  121787 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:13.548653  121787 main.go:141] libmachine: Using API Version  1
	I0826 11:10:13.548676  121787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:13.548987  121787 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:13.549167  121787 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:13.549371  121787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:13.549395  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:13.552212  121787 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:13.552624  121787 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:13.552657  121787 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:13.552838  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:13.553024  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:13.553186  121787 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:13.553352  121787 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:13.638207  121787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:13.653550  121787 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (5.095331577s)

-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0826 11:10:15.077202  121888 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:15.077487  121888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:15.077498  121888 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:15.077503  121888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:15.077691  121888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:15.077897  121888 out.go:352] Setting JSON to false
	I0826 11:10:15.077933  121888 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:15.078035  121888 notify.go:220] Checking for updates...
	I0826 11:10:15.078306  121888 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:15.078322  121888 status.go:255] checking status of ha-055395 ...
	I0826 11:10:15.078949  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.079003  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.098296  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0826 11:10:15.098857  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.099544  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.099577  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.099952  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.100213  121888 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:15.102063  121888 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:15.102087  121888 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:15.102522  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.102573  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.118805  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0826 11:10:15.119298  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.119850  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.119878  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.120360  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.120580  121888 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:15.123696  121888 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:15.124214  121888 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:15.124248  121888 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:15.124949  121888 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:15.125403  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.125476  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.141533  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0826 11:10:15.142128  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.142689  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.142716  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.143153  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.143328  121888 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:15.143527  121888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:15.143559  121888 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:15.146690  121888 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:15.147236  121888 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:15.147277  121888 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:15.147417  121888 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:15.147651  121888 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:15.147801  121888 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:15.147928  121888 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:15.234477  121888 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:15.241014  121888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:15.257176  121888 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:15.257220  121888 api_server.go:166] Checking apiserver status ...
	I0826 11:10:15.257260  121888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:15.276102  121888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:15.292318  121888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:15.292373  121888 ssh_runner.go:195] Run: ls
	I0826 11:10:15.297597  121888 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:15.302439  121888 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:15.302471  121888 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:15.302485  121888 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:15.302517  121888 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:15.302961  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.303013  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.319270  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0826 11:10:15.319762  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.320299  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.320328  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.320661  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.320884  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:15.322613  121888 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:15.322637  121888 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:15.323024  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.323069  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.339270  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0826 11:10:15.339760  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.340309  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.340341  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.340705  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.340954  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:15.344277  121888 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:15.344845  121888 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:15.344881  121888 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:15.345204  121888 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:15.345673  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:15.345722  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:15.368173  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0826 11:10:15.368757  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:15.369333  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:15.369370  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:15.369703  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:15.369911  121888 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:15.370091  121888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:15.370109  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:15.373306  121888 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:15.373738  121888 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:15.373769  121888 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:15.373953  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:15.374160  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:15.374294  121888 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:15.374435  121888 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:16.371133  121888 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:16.371187  121888 retry.go:31] will retry after 318.103805ms: dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:19.759174  121888 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:19.759273  121888 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:19.759297  121888 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:19.759309  121888 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:19.759339  121888 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:19.759353  121888 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:19.759712  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:19.759749  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:19.775537  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41509
	I0826 11:10:19.776041  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:19.776628  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:19.776655  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:19.776969  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:19.777158  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:19.778616  121888 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:19.778633  121888 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:19.778996  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:19.779030  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:19.795681  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0826 11:10:19.796254  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:19.796731  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:19.796762  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:19.797122  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:19.797372  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:19.800626  121888 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:19.801179  121888 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:19.801217  121888 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:19.801462  121888 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:19.801801  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:19.801851  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:19.818505  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
	I0826 11:10:19.819110  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:19.819597  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:19.819621  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:19.819988  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:19.820224  121888 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:19.820417  121888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:19.820438  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:19.823744  121888 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:19.824361  121888 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:19.824401  121888 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:19.824570  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:19.824768  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:19.824935  121888 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:19.825078  121888 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:19.909958  121888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:19.924685  121888 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:19.924717  121888 api_server.go:166] Checking apiserver status ...
	I0826 11:10:19.924756  121888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:19.939644  121888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:19.954023  121888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:19.954084  121888 ssh_runner.go:195] Run: ls
	I0826 11:10:19.958710  121888 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:19.963702  121888 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:19.963724  121888 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:19.963734  121888 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:19.963750  121888 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:19.964100  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:19.964125  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:19.980700  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I0826 11:10:19.981141  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:19.981530  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:19.981552  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:19.981823  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:19.982029  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:19.983640  121888 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:19.983679  121888 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:19.983999  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:19.984044  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:20.000766  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0826 11:10:20.001188  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:20.001787  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:20.001818  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:20.002152  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:20.002341  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:20.005336  121888 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:20.005782  121888 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:20.005842  121888 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:20.005942  121888 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:20.006224  121888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:20.006268  121888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:20.022158  121888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0826 11:10:20.022661  121888 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:20.023244  121888 main.go:141] libmachine: Using API Version  1
	I0826 11:10:20.023274  121888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:20.023650  121888 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:20.023842  121888 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:20.024095  121888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:20.024114  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:20.027447  121888 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:20.027877  121888 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:20.027913  121888 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:20.028057  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:20.028224  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:20.028352  121888 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:20.028517  121888 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:20.110027  121888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:20.123662  121888 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (4.607074078s)

-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0826 11:10:21.906086  122005 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:21.906198  122005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:21.906203  122005 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:21.906208  122005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:21.906383  122005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:21.906550  122005 out.go:352] Setting JSON to false
	I0826 11:10:21.906578  122005 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:21.906639  122005 notify.go:220] Checking for updates...
	I0826 11:10:21.907012  122005 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:21.907030  122005 status.go:255] checking status of ha-055395 ...
	I0826 11:10:21.907488  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:21.907552  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:21.923813  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0826 11:10:21.924245  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:21.924870  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:21.924896  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:21.925263  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:21.925458  122005 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:21.927263  122005 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:21.927282  122005 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:21.927580  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:21.927623  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:21.944520  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0826 11:10:21.944998  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:21.945547  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:21.945588  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:21.946017  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:21.946255  122005 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:21.949736  122005 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:21.950187  122005 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:21.950214  122005 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:21.950399  122005 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:21.950711  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:21.950767  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:21.967453  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0826 11:10:21.967927  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:21.968416  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:21.968440  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:21.968893  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:21.969182  122005 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:21.969437  122005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:21.969493  122005 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:21.972514  122005 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:21.972977  122005 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:21.973020  122005 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:21.973161  122005 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:21.973351  122005 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:21.973513  122005 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:21.973759  122005 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:22.056021  122005 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:22.062416  122005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:22.080318  122005 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:22.080357  122005 api_server.go:166] Checking apiserver status ...
	I0826 11:10:22.080395  122005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:22.096677  122005 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:22.108298  122005 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:22.108378  122005 ssh_runner.go:195] Run: ls
	I0826 11:10:22.113490  122005 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:22.118100  122005 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:22.118131  122005 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:22.118142  122005 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:22.118161  122005 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:22.118453  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:22.118490  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:22.135341  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0826 11:10:22.135854  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:22.136373  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:22.136395  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:22.136717  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:22.136944  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:22.138863  122005 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:22.138893  122005 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:22.139235  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:22.139292  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:22.157182  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0826 11:10:22.157908  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:22.158478  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:22.158499  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:22.158883  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:22.159120  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:22.162674  122005 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:22.163263  122005 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:22.163292  122005 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:22.163537  122005 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:22.163855  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:22.163909  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:22.181041  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I0826 11:10:22.181482  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:22.181963  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:22.181987  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:22.182296  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:22.182470  122005 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:22.182695  122005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:22.182722  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:22.186073  122005 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:22.186587  122005 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:22.186629  122005 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:22.187001  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:22.187215  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:22.187385  122005 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:22.187577  122005 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:22.831129  122005 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:22.831184  122005 retry.go:31] will retry after 194.0606ms: dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:26.095121  122005 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:26.095261  122005 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:26.095293  122005 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:26.095303  122005 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:26.095327  122005 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:26.095334  122005 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:26.095684  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.095732  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.111591  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0826 11:10:26.112118  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.112583  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.112605  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.113005  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.113242  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:26.114951  122005 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:26.114972  122005 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:26.115383  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.115448  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.130931  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0826 11:10:26.131466  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.131999  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.132023  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.132346  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.132564  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:26.135940  122005 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:26.136414  122005 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:26.136443  122005 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:26.136592  122005 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:26.136881  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.136920  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.153716  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0826 11:10:26.154250  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.154822  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.154873  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.155250  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.155457  122005 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:26.155639  122005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:26.155697  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:26.158997  122005 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:26.159415  122005 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:26.159447  122005 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:26.159585  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:26.159778  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:26.159950  122005 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:26.160118  122005 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:26.246959  122005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:26.263164  122005 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:26.263194  122005 api_server.go:166] Checking apiserver status ...
	I0826 11:10:26.263239  122005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:26.277957  122005 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:26.290432  122005 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:26.290511  122005 ssh_runner.go:195] Run: ls
	I0826 11:10:26.296748  122005 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:26.302603  122005 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:26.302637  122005 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:26.302655  122005 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:26.302683  122005 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:26.303079  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.303124  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.319760  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
	I0826 11:10:26.320273  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.320865  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.320894  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.321294  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.321534  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:26.323370  122005 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:26.323388  122005 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:26.323758  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.323826  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.339224  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0826 11:10:26.339708  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.340180  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.340202  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.340500  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.340680  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:26.343419  122005 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:26.343900  122005 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:26.343940  122005 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:26.344116  122005 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:26.344450  122005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:26.344494  122005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:26.359795  122005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0826 11:10:26.360250  122005 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:26.360803  122005 main.go:141] libmachine: Using API Version  1
	I0826 11:10:26.360830  122005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:26.361180  122005 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:26.361420  122005 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:26.361630  122005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:26.361656  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:26.365005  122005 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:26.365479  122005 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:26.365510  122005 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:26.365719  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:26.365985  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:26.366171  122005 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:26.366341  122005 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:26.450274  122005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:26.464944  122005 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
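The stderr capture above shows how the status probe concludes that ha-055395-m02 is in Host:Error: it SSHes to each node to run df -h /var, and when the TCP dial to 192.168.39.55:22 keeps failing with "no route to host" it gives up and reports the kubelet and apiserver as Nonexistent. The snippet below is a minimal, hypothetical sketch of that probe and is not minikube's own code; the helper name probeVar is invented, while the key path, user and address are copied from the log.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeVar runs the same disk-usage command the log shows over SSH and
	// returns its output; a dial failure (e.g. "no route to host") surfaces
	// as an error, which the caller maps to Host:Error.
	func probeVar(addr, user string, signer ssh.Signer) (string, error) {
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
			Timeout:         5 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", fmt.Errorf("new client: %w", err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()

		out, err := sess.Output("df -h /var | awk 'NR==2{print $5}'")
		return string(out), err
	}

	func main() {
		keyPath := "/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			fmt.Println("read key:", err)
			return
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			fmt.Println("parse key:", err)
			return
		}
		if usage, err := probeVar("192.168.39.55:22", "docker", signer); err != nil {
			fmt.Println("host: Error //", err) // what the status table reports for m02
		} else {
			fmt.Println("/var usage:", usage)
		}
	}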
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (4.228560551s)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:10:28.587542  122106 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:28.587806  122106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:28.587817  122106 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:28.587824  122106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:28.588037  122106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:28.588209  122106 out.go:352] Setting JSON to false
	I0826 11:10:28.588236  122106 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:28.588297  122106 notify.go:220] Checking for updates...
	I0826 11:10:28.588756  122106 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:28.588779  122106 status.go:255] checking status of ha-055395 ...
	I0826 11:10:28.589272  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.589362  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.606385  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41341
	I0826 11:10:28.606904  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.607557  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.607575  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.607985  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.608204  122106 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:28.610269  122106 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:28.610302  122106 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:28.610593  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.610648  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.627318  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0826 11:10:28.627898  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.628389  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.628420  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.628751  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.629003  122106 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:28.632564  122106 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:28.633068  122106 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:28.633109  122106 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:28.633236  122106 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:28.633535  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.633577  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.649750  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0826 11:10:28.650391  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.651024  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.651056  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.651649  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.651865  122106 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:28.652087  122106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:28.652117  122106 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:28.655252  122106 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:28.655769  122106 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:28.655807  122106 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:28.655904  122106 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:28.656087  122106 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:28.656274  122106 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:28.656444  122106 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:28.734573  122106 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:28.741308  122106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:28.760792  122106 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:28.760835  122106 api_server.go:166] Checking apiserver status ...
	I0826 11:10:28.760889  122106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:28.774959  122106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:28.784646  122106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:28.784711  122106 ssh_runner.go:195] Run: ls
	I0826 11:10:28.790179  122106 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:28.795725  122106 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:28.795754  122106 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:28.795771  122106 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:28.795789  122106 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:28.796198  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.796246  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.814070  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I0826 11:10:28.814512  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.815058  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.815084  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.815525  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.815744  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:28.817479  122106 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:28.817499  122106 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:28.817831  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.817873  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.835093  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0826 11:10:28.835648  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.836240  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.836264  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.836586  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.836794  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:28.839633  122106 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:28.840086  122106 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:28.840117  122106 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:28.840286  122106 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:28.840601  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:28.840638  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:28.858135  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I0826 11:10:28.858625  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:28.859187  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:28.859217  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:28.859555  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:28.859793  122106 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:28.859987  122106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:28.860012  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:28.863411  122106 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:28.863824  122106 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:28.863856  122106 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:28.864047  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:28.864219  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:28.864394  122106 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:28.864613  122106 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:29.171074  122106 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:29.171131  122106 retry.go:31] will retry after 171.247716ms: dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:32.399137  122106 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:32.399234  122106 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:32.399252  122106 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:32.399265  122106 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:32.399287  122106 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:32.399295  122106 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:32.399702  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.399754  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.419201  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0826 11:10:32.419829  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.420395  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.420421  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.420874  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.421108  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:32.423057  122106 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:32.423074  122106 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:32.423372  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.423407  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.438696  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45989
	I0826 11:10:32.439210  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.439716  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.439740  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.440055  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.440266  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:32.443194  122106 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:32.443660  122106 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:32.443693  122106 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:32.443845  122106 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:32.444179  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.444226  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.459883  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0826 11:10:32.460365  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.460900  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.460920  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.461302  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.461543  122106 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:32.461789  122106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:32.461816  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:32.465073  122106 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:32.465579  122106 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:32.465609  122106 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:32.465738  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:32.465947  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:32.466167  122106 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:32.466337  122106 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:32.550416  122106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:32.566755  122106 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:32.566788  122106 api_server.go:166] Checking apiserver status ...
	I0826 11:10:32.566828  122106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:32.580731  122106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:32.591164  122106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:32.591228  122106 ssh_runner.go:195] Run: ls
	I0826 11:10:32.596118  122106 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:32.602331  122106 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:32.602364  122106 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:32.602376  122106 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:32.602400  122106 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:32.602784  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.602851  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.619823  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I0826 11:10:32.620355  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.620911  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.620934  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.621285  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.621501  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:32.623131  122106 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:32.623150  122106 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:32.623533  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.623583  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.640149  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I0826 11:10:32.640679  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.641179  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.641200  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.641515  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.641722  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:32.645281  122106 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:32.645759  122106 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:32.645791  122106 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:32.645927  122106 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:32.646327  122106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:32.646379  122106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:32.663087  122106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0826 11:10:32.663558  122106 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:32.664077  122106 main.go:141] libmachine: Using API Version  1
	I0826 11:10:32.664097  122106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:32.664440  122106 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:32.664759  122106 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:32.664971  122106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:32.664996  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:32.668187  122106 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:32.668807  122106 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:32.668832  122106 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:32.669018  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:32.669242  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:32.669420  122106 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:32.669601  122106 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:32.754375  122106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:32.769526  122106 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
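The same capture also shows how the apiserver state is decided for the reachable control-plane nodes: pgrep locates the kube-apiserver process, the egrep for a freezer entry in /proc/<pid>/cgroup fails (likely because the guest uses cgroup v2, which no longer lists a freezer controller there), and the probe falls back to GET https://192.168.39.254:8443/healthz, treating a 200 "ok" response as Running. Below is a minimal, hypothetical sketch of that fallback health check, not minikube's own implementation; the endpoint is taken from the log, and TLS verification is skipped only because the test cluster uses self-signed certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy hits the cluster VIP's healthz endpoint and reports
	// whether it answered with 200 "ok", as the log records for ha-055395.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed test certs
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println("apiserver running:", ok, "err:", err)
	}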
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (4.351003957s)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:10:34.894444  122206 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:34.894614  122206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:34.894626  122206 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:34.894630  122206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:34.894812  122206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:34.895033  122206 out.go:352] Setting JSON to false
	I0826 11:10:34.895064  122206 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:34.895128  122206 notify.go:220] Checking for updates...
	I0826 11:10:34.895561  122206 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:34.895584  122206 status.go:255] checking status of ha-055395 ...
	I0826 11:10:34.896014  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:34.896082  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:34.913378  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0826 11:10:34.913859  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:34.914482  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:34.914506  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:34.915008  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:34.915308  122206 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:34.917048  122206 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:34.917070  122206 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:34.917402  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:34.917452  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:34.933777  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0826 11:10:34.934271  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:34.934939  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:34.934966  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:34.935298  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:34.935495  122206 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:34.938578  122206 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:34.939051  122206 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:34.939089  122206 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:34.939228  122206 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:34.939538  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:34.939603  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:34.956057  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0826 11:10:34.956504  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:34.957058  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:34.957082  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:34.957444  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:34.957657  122206 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:34.957913  122206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:34.957955  122206 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:34.961323  122206 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:34.961787  122206 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:34.961826  122206 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:34.961984  122206 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:34.962214  122206 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:34.962359  122206 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:34.962535  122206 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:35.042714  122206 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:35.049508  122206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:35.075537  122206 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:35.075573  122206 api_server.go:166] Checking apiserver status ...
	I0826 11:10:35.075615  122206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:35.094605  122206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:35.105932  122206 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:35.105991  122206 ssh_runner.go:195] Run: ls
	I0826 11:10:35.112452  122206 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:35.116889  122206 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:35.116913  122206 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:35.116933  122206 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:35.116952  122206 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:35.117234  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:35.117270  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:35.132780  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0826 11:10:35.133267  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:35.133803  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:35.133832  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:35.134179  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:35.134392  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:35.136259  122206 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:35.136279  122206 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:35.136603  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:35.136641  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:35.153136  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0826 11:10:35.153566  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:35.154084  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:35.154108  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:35.154456  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:35.154651  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:35.157381  122206 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:35.157878  122206 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:35.157926  122206 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:35.158150  122206 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:35.158458  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:35.158500  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:35.176186  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0826 11:10:35.176704  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:35.177212  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:35.177237  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:35.177539  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:35.177761  122206 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:35.177980  122206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:35.178009  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:35.181324  122206 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:35.181767  122206 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:35.181795  122206 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:35.181982  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:35.182159  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:35.182325  122206 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:35.182449  122206 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:35.471091  122206 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:35.471141  122206 retry.go:31] will retry after 285.690753ms: dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:38.831158  122206 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:38.831291  122206 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:38.831317  122206 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:38.831329  122206 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:38.831360  122206 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:38.831375  122206 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:38.832077  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:38.832143  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:38.849476  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0826 11:10:38.849997  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:38.850565  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:38.850591  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:38.850935  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:38.851151  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:38.852719  122206 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:38.852740  122206 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:38.853025  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:38.853071  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:38.869052  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0826 11:10:38.869546  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:38.870058  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:38.870079  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:38.870391  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:38.870626  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:38.873639  122206 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:38.874051  122206 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:38.874086  122206 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:38.874208  122206 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:38.874546  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:38.874596  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:38.891442  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0826 11:10:38.891953  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:38.892411  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:38.892432  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:38.892833  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:38.893065  122206 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:38.893259  122206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:38.893281  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:38.896299  122206 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:38.896756  122206 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:38.896791  122206 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:38.896971  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:38.897167  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:38.897301  122206 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:38.897410  122206 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:38.982810  122206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:38.999344  122206 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:38.999373  122206 api_server.go:166] Checking apiserver status ...
	I0826 11:10:38.999405  122206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:39.013712  122206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:39.024385  122206 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:39.024458  122206 ssh_runner.go:195] Run: ls
	I0826 11:10:39.029203  122206 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:39.033356  122206 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:39.033385  122206 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:39.033397  122206 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:39.033423  122206 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:39.033798  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:39.033837  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:39.049550  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I0826 11:10:39.050053  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:39.050559  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:39.050588  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:39.050961  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:39.051204  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:39.052960  122206 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:39.052979  122206 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:39.053353  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:39.053403  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:39.069997  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0826 11:10:39.070501  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:39.071039  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:39.071060  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:39.071361  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:39.071546  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:39.074618  122206 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:39.075171  122206 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:39.075211  122206 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:39.075399  122206 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:39.075689  122206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:39.075727  122206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:39.091186  122206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36173
	I0826 11:10:39.091734  122206 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:39.092270  122206 main.go:141] libmachine: Using API Version  1
	I0826 11:10:39.092301  122206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:39.092701  122206 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:39.092926  122206 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:39.093130  122206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:39.093156  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:39.096212  122206 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:39.096548  122206 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:39.096580  122206 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:39.096677  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:39.096860  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:39.096996  122206 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:39.097183  122206 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:39.183200  122206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:39.198561  122206 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
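Editor's note on the stderr above: for each control-plane node the status probe SSHes in, runs the kubelet/apiserver checks, and then polls the HA load-balancer endpoint https://192.168.39.254:8443/healthz, treating a 200 response as "apiserver Running". The sketch below reproduces only that last HTTP step. It is illustrative only, skips TLS verification (an assumption made for brevity), and is not minikube's own implementation.

    // healthz_probe.go - minimal sketch of the healthz poll seen in the log above.
    // Assumes InsecureSkipVerify; minikube's real check may handle certs differently.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.254:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz returned:", resp.Status) // the log above expects "200 OK"
    }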
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (3.773214198s)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:10:45.553364  122322 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:10:45.553495  122322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:45.553503  122322 out.go:358] Setting ErrFile to fd 2...
	I0826 11:10:45.553508  122322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:10:45.553736  122322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:10:45.553947  122322 out.go:352] Setting JSON to false
	I0826 11:10:45.553987  122322 mustload.go:65] Loading cluster: ha-055395
	I0826 11:10:45.554128  122322 notify.go:220] Checking for updates...
	I0826 11:10:45.554457  122322 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:10:45.554476  122322 status.go:255] checking status of ha-055395 ...
	I0826 11:10:45.555055  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.555130  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.570779  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0826 11:10:45.571499  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.572167  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.572193  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.572616  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.572847  122322 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:10:45.574709  122322 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:10:45.574727  122322 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:45.575110  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.575163  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.590517  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
	I0826 11:10:45.591036  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.591538  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.591562  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.591878  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.592086  122322 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:10:45.595294  122322 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:45.595957  122322 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:45.595991  122322 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:45.596066  122322 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:10:45.596370  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.596409  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.612248  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0826 11:10:45.612707  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.613178  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.613198  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.613618  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.613836  122322 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:10:45.614092  122322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:45.614187  122322 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:10:45.617167  122322 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:45.617555  122322 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:10:45.617590  122322 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:10:45.617723  122322 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:10:45.617915  122322 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:10:45.618084  122322 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:10:45.618241  122322 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:10:45.703831  122322 ssh_runner.go:195] Run: systemctl --version
	I0826 11:10:45.711402  122322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:45.727177  122322 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:45.727214  122322 api_server.go:166] Checking apiserver status ...
	I0826 11:10:45.727256  122322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:45.742397  122322 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:10:45.752426  122322 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:45.752496  122322 ssh_runner.go:195] Run: ls
	I0826 11:10:45.756730  122322 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:45.764325  122322 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:45.764363  122322 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:10:45.764377  122322 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:45.764402  122322 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:10:45.764735  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.764778  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.781238  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0826 11:10:45.781723  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.782252  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.782280  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.782635  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.782896  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:10:45.784899  122322 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:10:45.784918  122322 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:45.785219  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.785264  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.804709  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0826 11:10:45.805478  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.806142  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.806174  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.806570  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.806824  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:10:45.810164  122322 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:45.810820  122322 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:45.810912  122322 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:45.811113  122322 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:10:45.811543  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:45.811595  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:45.827920  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41939
	I0826 11:10:45.828355  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:45.828925  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:45.828958  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:45.829279  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:45.829470  122322 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:10:45.829704  122322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:45.829730  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:10:45.832585  122322 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:45.833083  122322 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:10:45.833100  122322 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:10:45.833263  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:10:45.833449  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:10:45.833594  122322 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:10:45.833877  122322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	W0826 11:10:48.911148  122322 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.55:22: connect: no route to host
	W0826 11:10:48.911244  122322 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0826 11:10:48.911276  122322 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:48.911291  122322 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:10:48.911309  122322 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	I0826 11:10:48.911318  122322 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:10:48.911645  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:48.911708  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:48.928571  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I0826 11:10:48.929101  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:48.929653  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:48.929683  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:48.930026  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:48.930246  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:10:48.932593  122322 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:10:48.932621  122322 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:48.932986  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:48.933051  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:48.949052  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0826 11:10:48.949585  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:48.950198  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:48.950218  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:48.950564  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:48.950849  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:10:48.954397  122322 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:48.954939  122322 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:48.954972  122322 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:48.955117  122322 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:10:48.955419  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:48.955461  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:48.972376  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0826 11:10:48.972857  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:48.973329  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:48.973355  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:48.973733  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:48.973925  122322 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:10:48.974116  122322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:48.974135  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:10:48.977303  122322 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:48.977761  122322 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:10:48.977781  122322 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:10:48.978031  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:10:48.978252  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:10:48.978419  122322 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:10:48.978585  122322 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:10:49.062439  122322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:49.077662  122322 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:10:49.077700  122322 api_server.go:166] Checking apiserver status ...
	I0826 11:10:49.077750  122322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:10:49.093452  122322 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:10:49.104529  122322 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:10:49.104590  122322 ssh_runner.go:195] Run: ls
	I0826 11:10:49.109323  122322 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:10:49.115839  122322 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:10:49.115868  122322 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:10:49.115878  122322 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:10:49.115897  122322 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:10:49.116247  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:49.116291  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:49.131528  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0826 11:10:49.131976  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:49.132414  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:49.132435  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:49.132773  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:49.132963  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:10:49.134583  122322 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:10:49.134602  122322 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:49.134930  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:49.134976  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:49.150015  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0826 11:10:49.150423  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:49.150943  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:49.150965  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:49.151279  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:49.151472  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:10:49.154224  122322 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:49.154612  122322 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:49.154645  122322 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:49.154804  122322 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:10:49.155131  122322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:10:49.155167  122322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:10:49.170880  122322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0826 11:10:49.171302  122322 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:10:49.171744  122322 main.go:141] libmachine: Using API Version  1
	I0826 11:10:49.171767  122322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:10:49.172126  122322 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:10:49.172333  122322 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:10:49.172552  122322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:10:49.172574  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:10:49.175332  122322 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:49.175761  122322 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:10:49.175793  122322 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:10:49.175905  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:10:49.176079  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:10:49.176240  122322 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:10:49.176373  122322 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:10:49.262958  122322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:10:49.276930  122322 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
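Editor's note: the exit status 3 run above is dominated by the SSH dial to ha-055395-m02 (192.168.39.55:22) failing with "connect: no route to host", which is what downgrades that node to Host:Error / Kubelet:Nonexistent while the other nodes still report Running. A minimal reachability probe of that kind is sketched below; it assumes a plain TCP dial is enough to surface the same error and does not reproduce minikube's sshutil retry or key-auth logic.

    // ssh_reachability.go - sketch of a TCP reachability check against the node
    // that failed in the log above. Address taken from the log; illustrative only.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.39.55:22" // ha-055395-m02
    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		fmt.Println("node unreachable:", err) // e.g. connect: no route to host
    		return
    	}
    	conn.Close()
    	fmt.Println("port 22 reachable on", addr)
    }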
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 7 (680.476211ms)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:11:00.606591  122492 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:11:00.606888  122492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:11:00.606900  122492 out.go:358] Setting ErrFile to fd 2...
	I0826 11:11:00.606905  122492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:11:00.607080  122492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:11:00.607253  122492 out.go:352] Setting JSON to false
	I0826 11:11:00.607285  122492 mustload.go:65] Loading cluster: ha-055395
	I0826 11:11:00.607407  122492 notify.go:220] Checking for updates...
	I0826 11:11:00.607840  122492 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:11:00.607862  122492 status.go:255] checking status of ha-055395 ...
	I0826 11:11:00.608333  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.608371  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.630133  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0826 11:11:00.630643  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.631336  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.631372  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.631729  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.631940  122492 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:11:00.633871  122492 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:11:00.633889  122492 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:11:00.634190  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.634226  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.650141  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
	I0826 11:11:00.650829  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.651738  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.651787  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.652150  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.652330  122492 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:11:00.656103  122492 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:11:00.656482  122492 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:11:00.656510  122492 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:11:00.656767  122492 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:11:00.657118  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.657168  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.672677  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0826 11:11:00.673169  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.673664  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.673688  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.674035  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.674384  122492 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:11:00.674698  122492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:11:00.674786  122492 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:11:00.678271  122492 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:11:00.678723  122492 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:11:00.678752  122492 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:11:00.678989  122492 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:11:00.679264  122492 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:11:00.679454  122492 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:11:00.679707  122492 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:11:00.770015  122492 ssh_runner.go:195] Run: systemctl --version
	I0826 11:11:00.776915  122492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:11:00.792863  122492 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:11:00.792906  122492 api_server.go:166] Checking apiserver status ...
	I0826 11:11:00.792945  122492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:11:00.808599  122492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0826 11:11:00.822042  122492 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:11:00.822120  122492 ssh_runner.go:195] Run: ls
	I0826 11:11:00.827583  122492 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:11:00.834108  122492 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:11:00.834145  122492 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:11:00.834160  122492 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:11:00.834190  122492 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:11:00.834488  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.834529  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.851131  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0826 11:11:00.851564  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.852044  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.852066  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.852436  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.852690  122492 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:11:00.854454  122492 status.go:330] ha-055395-m02 host status = "Stopped" (err=<nil>)
	I0826 11:11:00.854467  122492 status.go:343] host is not running, skipping remaining checks
	I0826 11:11:00.854473  122492 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:11:00.854491  122492 status.go:255] checking status of ha-055395-m03 ...
	I0826 11:11:00.854780  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.854817  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.872182  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0826 11:11:00.872693  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.873250  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.873280  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.873657  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.873943  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:11:00.875976  122492 status.go:330] ha-055395-m03 host status = "Running" (err=<nil>)
	I0826 11:11:00.875997  122492 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:11:00.876304  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.876357  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.893709  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0826 11:11:00.894225  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.894727  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.894750  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.895132  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.895381  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:11:00.902339  122492 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:00.902712  122492 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:11:00.902744  122492 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:00.902985  122492 host.go:66] Checking if "ha-055395-m03" exists ...
	I0826 11:11:00.903294  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:00.903351  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:00.919619  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0826 11:11:00.920131  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:00.920589  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:00.920614  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:00.920910  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:00.921183  122492 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:11:00.921370  122492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:11:00.921395  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:11:00.924377  122492 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:00.924826  122492 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:11:00.924881  122492 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:00.924998  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:11:00.925189  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:11:00.925330  122492 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:11:00.925494  122492 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:11:01.016010  122492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:11:01.036306  122492 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:11:01.036341  122492 api_server.go:166] Checking apiserver status ...
	I0826 11:11:01.036376  122492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:11:01.052110  122492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	W0826 11:11:01.062396  122492 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:11:01.062471  122492 ssh_runner.go:195] Run: ls
	I0826 11:11:01.069722  122492 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:11:01.074238  122492 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:11:01.074269  122492 status.go:422] ha-055395-m03 apiserver status = Running (err=<nil>)
	I0826 11:11:01.074278  122492 status.go:257] ha-055395-m03 status: &{Name:ha-055395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:11:01.074297  122492 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:11:01.074588  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:01.074622  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:01.091648  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0826 11:11:01.092165  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:01.092787  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:01.092815  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:01.093169  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:01.093375  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:11:01.095092  122492 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:11:01.095117  122492 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:11:01.095531  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:01.095589  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:01.111300  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0826 11:11:01.111804  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:01.112299  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:01.112326  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:01.112675  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:01.112880  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:11:01.116146  122492 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:01.116610  122492 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:11:01.116652  122492 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:01.116920  122492 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:11:01.117249  122492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:01.117292  122492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:01.133905  122492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0826 11:11:01.134399  122492 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:01.134937  122492 main.go:141] libmachine: Using API Version  1
	I0826 11:11:01.134964  122492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:01.135371  122492 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:01.135597  122492 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:11:01.135838  122492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:11:01.135860  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:11:01.139682  122492 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:01.140234  122492 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:11:01.140259  122492 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:01.140406  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:11:01.140632  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:11:01.140868  122492 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:11:01.141045  122492 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:11:01.226401  122492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:11:01.240730  122492 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr" : exit status 7
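Editor's note: by the second retry (11:11:00) libvirt reports the m02 domain as Stopped, so the status command skips the SSH checks entirely and exits 7, and the post-mortem below collects logs from the surviving nodes. If you want to rerun the harness's invocation by hand from Go, a hedged sketch is below (binary path and profile copied from the log above; the real ha_test.go helpers differ).

    // rerun_status.go - sketch that re-runs the failing status command and
    // surfaces its exit code, similar to what the test harness observed.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-055395", "status", "-v=7", "--alsologtostderr")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		// In the run above this was exit status 7, reported alongside a Stopped node.
    		fmt.Println("status exited non-zero:", err)
    	}
    }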
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-055395 -n ha-055395
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-055395 logs -n 25: (1.413568603s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m03_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m04 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp testdata/cp-test.txt                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m04_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03:/home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m03 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-055395 node stop m02 -v=7                                                     | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-055395 node start m02 -v=7                                                    | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:03:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:03:09.834067  117024 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:03:09.834452  117024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:03:09.834464  117024 out.go:358] Setting ErrFile to fd 2...
	I0826 11:03:09.834471  117024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:03:09.834703  117024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:03:09.835384  117024 out.go:352] Setting JSON to false
	I0826 11:03:09.836326  117024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2731,"bootTime":1724667459,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:03:09.836399  117024 start.go:139] virtualization: kvm guest
	I0826 11:03:09.838707  117024 out.go:177] * [ha-055395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:03:09.840354  117024 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:03:09.840456  117024 notify.go:220] Checking for updates...
	I0826 11:03:09.843077  117024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:03:09.844558  117024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:09.845871  117024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:09.847213  117024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:03:09.848484  117024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:03:09.850036  117024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:03:09.886784  117024 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 11:03:09.888406  117024 start.go:297] selected driver: kvm2
	I0826 11:03:09.888434  117024 start.go:901] validating driver "kvm2" against <nil>
	I0826 11:03:09.888446  117024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:03:09.889211  117024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:03:09.889284  117024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:03:09.905954  117024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:03:09.906005  117024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 11:03:09.906210  117024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:03:09.906246  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:09.906258  117024 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0826 11:03:09.906266  117024 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0826 11:03:09.906313  117024 start.go:340] cluster config:
	{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:03:09.906422  117024 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:03:09.908564  117024 out.go:177] * Starting "ha-055395" primary control-plane node in "ha-055395" cluster
	I0826 11:03:09.909846  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:09.909889  117024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:03:09.909896  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:03:09.909993  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:03:09.910005  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:03:09.910292  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:09.910312  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json: {Name:mk57a761cf1d0c8f62f7f6828100d65bc5ffba3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:09.910450  117024 start.go:360] acquireMachinesLock for ha-055395: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:03:09.910485  117024 start.go:364] duration metric: took 22.171µs to acquireMachinesLock for "ha-055395"
	I0826 11:03:09.910502  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:09.910574  117024 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 11:03:09.912342  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:03:09.912478  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:09.912503  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:09.927829  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I0826 11:03:09.928348  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:09.928999  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:09.929030  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:09.929451  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:09.929667  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:09.929851  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:09.930021  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:03:09.930074  117024 client.go:168] LocalClient.Create starting
	I0826 11:03:09.930124  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:03:09.930164  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:09.930183  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:09.930256  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:03:09.930289  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:09.930306  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:09.930335  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:03:09.930346  117024 main.go:141] libmachine: (ha-055395) Calling .PreCreateCheck
	I0826 11:03:09.930719  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:09.931257  117024 main.go:141] libmachine: Creating machine...
	I0826 11:03:09.931270  117024 main.go:141] libmachine: (ha-055395) Calling .Create
	I0826 11:03:09.931409  117024 main.go:141] libmachine: (ha-055395) Creating KVM machine...
	I0826 11:03:09.933244  117024 main.go:141] libmachine: (ha-055395) DBG | found existing default KVM network
	I0826 11:03:09.934337  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:09.934167  117048 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c00}
	I0826 11:03:09.934398  117024 main.go:141] libmachine: (ha-055395) DBG | created network xml: 
	I0826 11:03:09.934427  117024 main.go:141] libmachine: (ha-055395) DBG | <network>
	I0826 11:03:09.934437  117024 main.go:141] libmachine: (ha-055395) DBG |   <name>mk-ha-055395</name>
	I0826 11:03:09.934443  117024 main.go:141] libmachine: (ha-055395) DBG |   <dns enable='no'/>
	I0826 11:03:09.934451  117024 main.go:141] libmachine: (ha-055395) DBG |   
	I0826 11:03:09.934459  117024 main.go:141] libmachine: (ha-055395) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0826 11:03:09.934482  117024 main.go:141] libmachine: (ha-055395) DBG |     <dhcp>
	I0826 11:03:09.934506  117024 main.go:141] libmachine: (ha-055395) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0826 11:03:09.934583  117024 main.go:141] libmachine: (ha-055395) DBG |     </dhcp>
	I0826 11:03:09.934616  117024 main.go:141] libmachine: (ha-055395) DBG |   </ip>
	I0826 11:03:09.934629  117024 main.go:141] libmachine: (ha-055395) DBG |   
	I0826 11:03:09.934641  117024 main.go:141] libmachine: (ha-055395) DBG | </network>
	I0826 11:03:09.934651  117024 main.go:141] libmachine: (ha-055395) DBG | 
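	(For reference, the <network> XML printed above is what the kvm2 driver hands to libvirt before creating the VM. A minimal sketch of the equivalent manual steps, assuming the XML were saved to a hypothetical file mk-ha-055395.xml:
	    virsh net-define mk-ha-055395.xml   # register the private network definition
	    virsh net-start mk-ha-055395        # activate it so DHCP can hand out 192.168.39.x leases
	    virsh net-list --all                # confirm mk-ha-055395 shows as active
	The driver itself performs these steps through the libvirt API rather than the virsh CLI, as the following log lines show.)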
	I0826 11:03:09.939867  117024 main.go:141] libmachine: (ha-055395) DBG | trying to create private KVM network mk-ha-055395 192.168.39.0/24...
	I0826 11:03:10.013535  117024 main.go:141] libmachine: (ha-055395) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 ...
	I0826 11:03:10.013578  117024 main.go:141] libmachine: (ha-055395) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:03:10.013593  117024 main.go:141] libmachine: (ha-055395) DBG | private KVM network mk-ha-055395 192.168.39.0/24 created
	I0826 11:03:10.013610  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.013438  117048 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:10.013629  117024 main.go:141] libmachine: (ha-055395) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:03:10.292908  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.292769  117048 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa...
	I0826 11:03:10.387887  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.387727  117048 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/ha-055395.rawdisk...
	I0826 11:03:10.387917  117024 main.go:141] libmachine: (ha-055395) DBG | Writing magic tar header
	I0826 11:03:10.387930  117024 main.go:141] libmachine: (ha-055395) DBG | Writing SSH key tar header
	I0826 11:03:10.387941  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:10.387879  117048 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 ...
	I0826 11:03:10.387956  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395
	I0826 11:03:10.387973  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395 (perms=drwx------)
	I0826 11:03:10.387995  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:03:10.388005  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:03:10.388019  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:03:10.388033  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:03:10.388119  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:03:10.388156  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:10.388164  117024 main.go:141] libmachine: (ha-055395) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:03:10.388179  117024 main.go:141] libmachine: (ha-055395) Creating domain...
	I0826 11:03:10.388224  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:03:10.388258  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:03:10.388274  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:03:10.388288  117024 main.go:141] libmachine: (ha-055395) DBG | Checking permissions on dir: /home
	I0826 11:03:10.388313  117024 main.go:141] libmachine: (ha-055395) DBG | Skipping /home - not owner
	I0826 11:03:10.389256  117024 main.go:141] libmachine: (ha-055395) define libvirt domain using xml: 
	I0826 11:03:10.389297  117024 main.go:141] libmachine: (ha-055395) <domain type='kvm'>
	I0826 11:03:10.389308  117024 main.go:141] libmachine: (ha-055395)   <name>ha-055395</name>
	I0826 11:03:10.389320  117024 main.go:141] libmachine: (ha-055395)   <memory unit='MiB'>2200</memory>
	I0826 11:03:10.389330  117024 main.go:141] libmachine: (ha-055395)   <vcpu>2</vcpu>
	I0826 11:03:10.389339  117024 main.go:141] libmachine: (ha-055395)   <features>
	I0826 11:03:10.389353  117024 main.go:141] libmachine: (ha-055395)     <acpi/>
	I0826 11:03:10.389361  117024 main.go:141] libmachine: (ha-055395)     <apic/>
	I0826 11:03:10.389371  117024 main.go:141] libmachine: (ha-055395)     <pae/>
	I0826 11:03:10.389383  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389405  117024 main.go:141] libmachine: (ha-055395)   </features>
	I0826 11:03:10.389421  117024 main.go:141] libmachine: (ha-055395)   <cpu mode='host-passthrough'>
	I0826 11:03:10.389427  117024 main.go:141] libmachine: (ha-055395)   
	I0826 11:03:10.389435  117024 main.go:141] libmachine: (ha-055395)   </cpu>
	I0826 11:03:10.389440  117024 main.go:141] libmachine: (ha-055395)   <os>
	I0826 11:03:10.389447  117024 main.go:141] libmachine: (ha-055395)     <type>hvm</type>
	I0826 11:03:10.389453  117024 main.go:141] libmachine: (ha-055395)     <boot dev='cdrom'/>
	I0826 11:03:10.389461  117024 main.go:141] libmachine: (ha-055395)     <boot dev='hd'/>
	I0826 11:03:10.389466  117024 main.go:141] libmachine: (ha-055395)     <bootmenu enable='no'/>
	I0826 11:03:10.389473  117024 main.go:141] libmachine: (ha-055395)   </os>
	I0826 11:03:10.389478  117024 main.go:141] libmachine: (ha-055395)   <devices>
	I0826 11:03:10.389485  117024 main.go:141] libmachine: (ha-055395)     <disk type='file' device='cdrom'>
	I0826 11:03:10.389496  117024 main.go:141] libmachine: (ha-055395)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/boot2docker.iso'/>
	I0826 11:03:10.389505  117024 main.go:141] libmachine: (ha-055395)       <target dev='hdc' bus='scsi'/>
	I0826 11:03:10.389510  117024 main.go:141] libmachine: (ha-055395)       <readonly/>
	I0826 11:03:10.389517  117024 main.go:141] libmachine: (ha-055395)     </disk>
	I0826 11:03:10.389524  117024 main.go:141] libmachine: (ha-055395)     <disk type='file' device='disk'>
	I0826 11:03:10.389531  117024 main.go:141] libmachine: (ha-055395)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:03:10.389539  117024 main.go:141] libmachine: (ha-055395)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/ha-055395.rawdisk'/>
	I0826 11:03:10.389546  117024 main.go:141] libmachine: (ha-055395)       <target dev='hda' bus='virtio'/>
	I0826 11:03:10.389573  117024 main.go:141] libmachine: (ha-055395)     </disk>
	I0826 11:03:10.389601  117024 main.go:141] libmachine: (ha-055395)     <interface type='network'>
	I0826 11:03:10.389613  117024 main.go:141] libmachine: (ha-055395)       <source network='mk-ha-055395'/>
	I0826 11:03:10.389624  117024 main.go:141] libmachine: (ha-055395)       <model type='virtio'/>
	I0826 11:03:10.389643  117024 main.go:141] libmachine: (ha-055395)     </interface>
	I0826 11:03:10.389654  117024 main.go:141] libmachine: (ha-055395)     <interface type='network'>
	I0826 11:03:10.389662  117024 main.go:141] libmachine: (ha-055395)       <source network='default'/>
	I0826 11:03:10.389674  117024 main.go:141] libmachine: (ha-055395)       <model type='virtio'/>
	I0826 11:03:10.389693  117024 main.go:141] libmachine: (ha-055395)     </interface>
	I0826 11:03:10.389713  117024 main.go:141] libmachine: (ha-055395)     <serial type='pty'>
	I0826 11:03:10.389726  117024 main.go:141] libmachine: (ha-055395)       <target port='0'/>
	I0826 11:03:10.389735  117024 main.go:141] libmachine: (ha-055395)     </serial>
	I0826 11:03:10.389750  117024 main.go:141] libmachine: (ha-055395)     <console type='pty'>
	I0826 11:03:10.389763  117024 main.go:141] libmachine: (ha-055395)       <target type='serial' port='0'/>
	I0826 11:03:10.389774  117024 main.go:141] libmachine: (ha-055395)     </console>
	I0826 11:03:10.389791  117024 main.go:141] libmachine: (ha-055395)     <rng model='virtio'>
	I0826 11:03:10.389804  117024 main.go:141] libmachine: (ha-055395)       <backend model='random'>/dev/random</backend>
	I0826 11:03:10.389821  117024 main.go:141] libmachine: (ha-055395)     </rng>
	I0826 11:03:10.389834  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389842  117024 main.go:141] libmachine: (ha-055395)     
	I0826 11:03:10.389861  117024 main.go:141] libmachine: (ha-055395)   </devices>
	I0826 11:03:10.389877  117024 main.go:141] libmachine: (ha-055395) </domain>
	I0826 11:03:10.389893  117024 main.go:141] libmachine: (ha-055395) 
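	(The <domain> XML above is likewise defined and booted via libvirt. Sketched by hand with virsh, assuming a hypothetical file ha-055395.xml containing that XML:
	    virsh define ha-055395.xml   # register the domain ("Creating domain..." below)
	    virsh start ha-055395        # boot from the boot2docker ISO and raw disk listed under <devices>
	    virsh dumpxml ha-055395      # re-read the definition, as "Getting domain xml..." does below
	)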
	I0826 11:03:10.394426  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:d8:50:59 in network default
	I0826 11:03:10.395164  117024 main.go:141] libmachine: (ha-055395) Ensuring networks are active...
	I0826 11:03:10.395182  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:10.396007  117024 main.go:141] libmachine: (ha-055395) Ensuring network default is active
	I0826 11:03:10.396336  117024 main.go:141] libmachine: (ha-055395) Ensuring network mk-ha-055395 is active
	I0826 11:03:10.397011  117024 main.go:141] libmachine: (ha-055395) Getting domain xml...
	I0826 11:03:10.397964  117024 main.go:141] libmachine: (ha-055395) Creating domain...
	I0826 11:03:11.608496  117024 main.go:141] libmachine: (ha-055395) Waiting to get IP...
	I0826 11:03:11.609319  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:11.609774  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:11.609804  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:11.609742  117048 retry.go:31] will retry after 224.423543ms: waiting for machine to come up
	I0826 11:03:11.836297  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:11.836820  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:11.836848  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:11.836779  117048 retry.go:31] will retry after 265.180359ms: waiting for machine to come up
	I0826 11:03:12.103409  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.103948  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.104023  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.103928  117048 retry.go:31] will retry after 370.79504ms: waiting for machine to come up
	I0826 11:03:12.476765  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.477246  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.477275  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.477191  117048 retry.go:31] will retry after 384.306618ms: waiting for machine to come up
	I0826 11:03:12.862866  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:12.863312  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:12.863344  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:12.863261  117048 retry.go:31] will retry after 740.562218ms: waiting for machine to come up
	I0826 11:03:13.605198  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:13.605687  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:13.605716  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:13.605650  117048 retry.go:31] will retry after 788.816503ms: waiting for machine to come up
	I0826 11:03:14.395780  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:14.396420  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:14.396446  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:14.396366  117048 retry.go:31] will retry after 741.467845ms: waiting for machine to come up
	I0826 11:03:15.139957  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:15.140381  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:15.140402  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:15.140337  117048 retry.go:31] will retry after 1.206059591s: waiting for machine to come up
	I0826 11:03:16.347725  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:16.348134  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:16.348164  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:16.348092  117048 retry.go:31] will retry after 1.231399953s: waiting for machine to come up
	I0826 11:03:17.581476  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:17.582043  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:17.582063  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:17.581997  117048 retry.go:31] will retry after 1.657218554s: waiting for machine to come up
	I0826 11:03:19.240853  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:19.241329  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:19.241363  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:19.241273  117048 retry.go:31] will retry after 1.846849017s: waiting for machine to come up
	I0826 11:03:21.089350  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:21.089818  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:21.089849  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:21.089754  117048 retry.go:31] will retry after 2.497649926s: waiting for machine to come up
	I0826 11:03:23.590666  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:23.591127  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:23.591163  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:23.591086  117048 retry.go:31] will retry after 4.092248941s: waiting for machine to come up
	I0826 11:03:27.686813  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:27.687335  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find current IP address of domain ha-055395 in network mk-ha-055395
	I0826 11:03:27.687358  117024 main.go:141] libmachine: (ha-055395) DBG | I0826 11:03:27.687276  117048 retry.go:31] will retry after 5.278012607s: waiting for machine to come up
	I0826 11:03:32.968801  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:32.969342  117024 main.go:141] libmachine: (ha-055395) Found IP for machine: 192.168.39.150
	I0826 11:03:32.969360  117024 main.go:141] libmachine: (ha-055395) Reserving static IP address...
	I0826 11:03:32.969372  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has current primary IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:32.969826  117024 main.go:141] libmachine: (ha-055395) DBG | unable to find host DHCP lease matching {name: "ha-055395", mac: "52:54:00:91:82:8b", ip: "192.168.39.150"} in network mk-ha-055395
	I0826 11:03:33.052147  117024 main.go:141] libmachine: (ha-055395) DBG | Getting to WaitForSSH function...
	I0826 11:03:33.052237  117024 main.go:141] libmachine: (ha-055395) Reserved static IP address: 192.168.39.150
	I0826 11:03:33.052289  117024 main.go:141] libmachine: (ha-055395) Waiting for SSH to be available...
	I0826 11:03:33.056078  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.056568  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.056592  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.056711  117024 main.go:141] libmachine: (ha-055395) DBG | Using SSH client type: external
	I0826 11:03:33.056737  117024 main.go:141] libmachine: (ha-055395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa (-rw-------)
	I0826 11:03:33.056766  117024 main.go:141] libmachine: (ha-055395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:03:33.056775  117024 main.go:141] libmachine: (ha-055395) DBG | About to run SSH command:
	I0826 11:03:33.056786  117024 main.go:141] libmachine: (ha-055395) DBG | exit 0
	I0826 11:03:33.178938  117024 main.go:141] libmachine: (ha-055395) DBG | SSH cmd err, output: <nil>: 
	I0826 11:03:33.179239  117024 main.go:141] libmachine: (ha-055395) KVM machine creation complete!
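	(The retry loop above is the driver polling libvirt's DHCP leases on mk-ha-055395 until the MAC 52:54:00:91:82:8b obtains an address, then probing SSH. A sketch of checking the same thing by hand:
	    virsh net-dhcp-leases mk-ha-055395   # should list 192.168.39.150 for 52:54:00:91:82:8b
	    ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa docker@192.168.39.150 'exit 0'   # the WaitForSSH probe shown above
	)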
	I0826 11:03:33.179607  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:33.180172  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:33.180402  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:33.180592  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:03:33.180608  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:33.181945  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:03:33.181965  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:03:33.181974  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:03:33.181982  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.184830  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.185291  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.185326  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.185481  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.185692  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.185863  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.185989  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.186127  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.186361  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.186376  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:03:33.286368  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:03:33.286397  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:03:33.286407  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.289364  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.289724  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.289754  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.289904  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.290096  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.290272  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.290395  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.290577  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.290750  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.290761  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:03:33.391613  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:03:33.391685  117024 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:03:33.391692  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:03:33.391705  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.392038  117024 buildroot.go:166] provisioning hostname "ha-055395"
	I0826 11:03:33.392073  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.392344  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.395408  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.395727  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.395751  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.395938  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.396205  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.396421  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.396636  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.396831  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.397014  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.397025  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395 && echo "ha-055395" | sudo tee /etc/hostname
	I0826 11:03:33.513672  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:03:33.513704  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.516623  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.516993  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.517032  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.517254  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.517472  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.517643  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.517818  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.518028  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.518217  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.518239  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:03:33.627944  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:03:33.627979  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:03:33.628039  117024 buildroot.go:174] setting up certificates
	I0826 11:03:33.628057  117024 provision.go:84] configureAuth start
	I0826 11:03:33.628073  117024 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:03:33.628380  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:33.631377  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.631748  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.631772  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.631927  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.634204  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.634603  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.634631  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.634783  117024 provision.go:143] copyHostCerts
	I0826 11:03:33.634817  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:03:33.634872  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:03:33.634898  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:03:33.634985  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:03:33.635112  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:03:33.635142  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:03:33.635152  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:03:33.635193  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:03:33.635254  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:03:33.635277  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:03:33.635286  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:03:33.635320  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:03:33.635390  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395 san=[127.0.0.1 192.168.39.150 ha-055395 localhost minikube]
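	(configureAuth generates a server certificate signed by the local minikube CA with the SANs listed above; copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the VM, as the scp lines below show. A sketch for inspecting the generated certificate, using the server.pem path from the log:
	    openssl x509 -in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -noout -subject -ext subjectAltName
	The subjectAltName output should match the san=[...] list in the line above.)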
	I0826 11:03:33.739702  117024 provision.go:177] copyRemoteCerts
	I0826 11:03:33.739767  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:03:33.739792  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.742758  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.743086  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.743130  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.743325  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.743520  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.743664  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.743807  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:33.824832  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:03:33.824939  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:03:33.849097  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:03:33.849187  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0826 11:03:33.871798  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:03:33.871885  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:03:33.894893  117024 provision.go:87] duration metric: took 266.81811ms to configureAuth
	I0826 11:03:33.894926  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:03:33.895099  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:33.895174  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:33.898313  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.898706  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:33.898737  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:33.898965  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:33.899176  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.899351  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:33.899494  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:33.899668  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:33.899887  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:33.899903  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:03:34.153675  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:03:34.153708  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:03:34.153716  117024 main.go:141] libmachine: (ha-055395) Calling .GetURL
	I0826 11:03:34.155133  117024 main.go:141] libmachine: (ha-055395) DBG | Using libvirt version 6000000
	I0826 11:03:34.157382  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.157739  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.157761  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.157981  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:03:34.157999  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:03:34.158007  117024 client.go:171] duration metric: took 24.227921772s to LocalClient.Create
	I0826 11:03:34.158033  117024 start.go:167] duration metric: took 24.228015034s to libmachine.API.Create "ha-055395"
	I0826 11:03:34.158045  117024 start.go:293] postStartSetup for "ha-055395" (driver="kvm2")
	I0826 11:03:34.158060  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:03:34.158083  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.158362  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:03:34.158390  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.160846  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.161147  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.161172  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.161356  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.161539  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.161694  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.161890  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.240762  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:03:34.244793  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:03:34.244821  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:03:34.244888  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:03:34.244962  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:03:34.244972  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:03:34.245068  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:03:34.254397  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:03:34.282025  117024 start.go:296] duration metric: took 123.960061ms for postStartSetup
	I0826 11:03:34.282091  117024 main.go:141] libmachine: (ha-055395) Calling .GetConfigRaw
	I0826 11:03:34.282754  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:34.286054  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.286485  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.286509  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.286858  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:34.287156  117024 start.go:128] duration metric: took 24.376564256s to createHost
	I0826 11:03:34.287188  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.289487  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.289901  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.289925  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.290240  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.290470  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.290605  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.290857  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.291072  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:03:34.291256  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:03:34.291273  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:03:34.399785  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670214.379153941
	
	I0826 11:03:34.399814  117024 fix.go:216] guest clock: 1724670214.379153941
	I0826 11:03:34.399826  117024 fix.go:229] Guest: 2024-08-26 11:03:34.379153941 +0000 UTC Remote: 2024-08-26 11:03:34.287172419 +0000 UTC m=+24.490698333 (delta=91.981522ms)
	I0826 11:03:34.399860  117024 fix.go:200] guest clock delta is within tolerance: 91.981522ms
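	The two fix.go lines above read the guest clock with "date +%s.%N" and compare it against the host clock, accepting the ~92ms delta. A minimal Go sketch of that comparison using the values from this run; the clockDelta helper and the 2s tolerance are assumptions for illustration, not minikube's actual fix.go code:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is ahead of (positive) or behind (negative) the host time.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Guest and host timestamps are the ones shown in the log above.
		host := time.Unix(0, 1724670214287172419)
		delta, err := clockDelta("1724670214.379153941", host)
		if err != nil {
			panic(err)
		}
		// The 2s tolerance is an assumed threshold for illustration.
		if delta > -2*time.Second && delta < 2*time.Second {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}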
	I0826 11:03:34.399866  117024 start.go:83] releasing machines lock for "ha-055395", held for 24.489372546s
	I0826 11:03:34.399890  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.400237  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:34.403050  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.403499  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.403521  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.403654  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404229  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404430  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:34.404511  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:03:34.404557  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.404690  117024 ssh_runner.go:195] Run: cat /version.json
	I0826 11:03:34.404716  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:34.407489  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407653  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407867  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.407903  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.407936  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:34.407952  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:34.408069  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.408299  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:34.408332  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.408558  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:34.408559  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.408794  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:34.408775  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.408963  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:34.527543  117024 ssh_runner.go:195] Run: systemctl --version
	I0826 11:03:34.533890  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:03:34.692657  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:03:34.698640  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:03:34.698717  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:03:34.715052  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:03:34.715086  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:03:34.715157  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:03:34.730592  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:03:34.744714  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:03:34.744793  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:03:34.758226  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:03:34.771923  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:03:34.887947  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:03:35.035349  117024 docker.go:233] disabling docker service ...
	I0826 11:03:35.035417  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:03:35.049879  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:03:35.062408  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:03:35.193889  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:03:35.329732  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:03:35.342913  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:03:35.360253  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:03:35.360322  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.370813  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:03:35.370900  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.381074  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.392635  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.403367  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:03:35.414733  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.426584  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:03:35.443776  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
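	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, run conmon in the "pod" cgroup, and open unprivileged low ports. A rough Go sketch of the same substitutions applied to the config text; minikube itself shells out to sed, so the rewriteCrioConf helper here is illustrative only:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the sed edits from the log: pin the pause image,
	// force the cgroupfs cgroup manager and run conmon in the "pod" cgroup.
	// (The log deletes and re-appends conmon_cgroup; an in-place replacement
	// is used here to keep the sketch short.)
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		return conf
	}

	func main() {
		sample := "pause_image = \"k8s.gcr.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(sample))
	}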
	I0826 11:03:35.453992  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:03:35.463419  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:03:35.463497  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:03:35.477269  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:03:35.487183  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:03:35.609378  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:03:35.740451  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:03:35.740543  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:03:35.745507  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:03:35.745610  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:03:35.749251  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:03:35.787232  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
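	After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to report a version before moving on. A small Go sketch of that wait, assuming a 500ms poll interval (the actual interval is not shown in the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a socket path until it appears or the deadline
	// passes, mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}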
	I0826 11:03:35.787327  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:03:35.815315  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:03:35.844399  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:03:35.846146  117024 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:03:35.848989  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:35.849355  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:35.849383  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:35.849674  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:03:35.853588  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:03:35.865877  117024 kubeadm.go:883] updating cluster {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:03:35.865989  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:35.866043  117024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:03:35.897173  117024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 11:03:35.897253  117024 ssh_runner.go:195] Run: which lz4
	I0826 11:03:35.901041  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0826 11:03:35.901171  117024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 11:03:35.905185  117024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 11:03:35.905229  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 11:03:37.198330  117024 crio.go:462] duration metric: took 1.297194802s to copy over tarball
	I0826 11:03:37.198412  117024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 11:03:39.276677  117024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078236047s)
	I0826 11:03:39.276711  117024 crio.go:469] duration metric: took 2.078346989s to extract the tarball
	I0826 11:03:39.276722  117024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 11:03:39.313763  117024 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:03:39.359702  117024 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:03:39.359732  117024 cache_images.go:84] Images are preloaded, skipping loading
	I0826 11:03:39.359745  117024 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.0 crio true true} ...
	I0826 11:03:39.359904  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:03:39.359999  117024 ssh_runner.go:195] Run: crio config
	I0826 11:03:39.409301  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:39.409333  117024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 11:03:39.409347  117024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:03:39.409380  117024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-055395 NodeName:ha-055395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:03:39.409557  117024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-055395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
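	The kubeadm config written above is a single file containing four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". A stdlib-only Go sketch that splits such a file and reports each document's kind; listKinds is a hypothetical helper, not part of minikube:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// kindRE pulls the top-level "kind:" field out of one YAML document.
	var kindRE = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

	// listKinds splits a multi-document YAML file on "---" separators and
	// returns each document's kind in order.
	func listKinds(yamlText string) []string {
		var kinds []string
		for _, doc := range strings.Split(yamlText, "\n---") {
			if m := kindRE.FindStringSubmatch(doc); m != nil {
				kinds = append(kinds, m[1])
			}
		}
		return kinds
	}

	func main() {
		sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
		fmt.Println(listKinds(sample)) // [InitConfiguration KubeletConfiguration]
	}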
	
	I0826 11:03:39.409585  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:03:39.409641  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:03:39.427739  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:03:39.427853  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
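	kube-vip.go renders this static-pod manifest from the control-plane VIP (192.168.39.254) and the API server port (8443). A trimmed text/template sketch of that kind of rendering; the template fragment and the vipConfig struct are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// vipConfig carries the two values the log shows being injected into the
	// kube-vip manifest: the control-plane VIP and the API server port.
	type vipConfig struct {
		Address string
		Port    string
	}

	// manifestTmpl is a trimmed, illustrative fragment of the env section;
	// the full manifest in the log carries many more entries.
	const manifestTmpl = `    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .Address }}
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		if err := t.Execute(os.Stdout, vipConfig{Address: "192.168.39.254", Port: "8443"}); err != nil {
			panic(err)
		}
	}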
	I0826 11:03:39.427919  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:03:39.437860  117024 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:03:39.437948  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0826 11:03:39.447555  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0826 11:03:39.463924  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:03:39.480746  117024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0826 11:03:39.497403  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0826 11:03:39.514189  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:03:39.517948  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:03:39.529999  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:03:39.648543  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:03:39.665059  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.150
	I0826 11:03:39.665089  117024 certs.go:194] generating shared ca certs ...
	I0826 11:03:39.665108  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.665299  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:03:39.665356  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:03:39.665369  117024 certs.go:256] generating profile certs ...
	I0826 11:03:39.665445  117024 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:03:39.665478  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt with IP's: []
	I0826 11:03:39.853443  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt ...
	I0826 11:03:39.853479  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt: {Name:mkc397b1a38dbc1647b20007cc4550ac4c76cb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.853664  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key ...
	I0826 11:03:39.853675  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key: {Name:mkef63b3342f1a90a16a5cf40496e63ab5aa7002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.853752  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa
	I0826 11:03:39.853766  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.254]
	I0826 11:03:39.961173  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa ...
	I0826 11:03:39.961217  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa: {Name:mk6de53fc57d5a4578e426a8fda2cbc0e119c40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.961393  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa ...
	I0826 11:03:39.961408  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa: {Name:mkf6d833d9635569571577746e5e1109a1cf347f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:39.961476  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.f7c186aa -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:03:39.961607  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.f7c186aa -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:03:39.961667  117024 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:03:39.961684  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt with IP's: []
	I0826 11:03:40.078200  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt ...
	I0826 11:03:40.078240  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt: {Name:mk33ca7bddb8f75ee337ba852e63f18daa5f2c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:40.078430  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key ...
	I0826 11:03:40.078443  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key: {Name:mk1bf6df6decfe2222d191672ac8677c0385a9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:40.078521  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:03:40.078547  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:03:40.078564  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:03:40.078578  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:03:40.078592  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:03:40.078607  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:03:40.078620  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:03:40.078632  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:03:40.078692  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:03:40.078731  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:03:40.078745  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:03:40.078769  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:03:40.078796  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:03:40.078823  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:03:40.078885  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:03:40.078914  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.078934  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.078949  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.079588  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:03:40.104793  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:03:40.127468  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:03:40.150072  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:03:40.172631  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 11:03:40.195723  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:03:40.218225  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:03:40.240516  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:03:40.262956  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:03:40.285934  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:03:40.308626  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:03:40.331138  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:03:40.347251  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:03:40.352639  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:03:40.362798  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.366922  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.366975  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:03:40.372443  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:03:40.383218  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:03:40.393973  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.398597  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.398679  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:03:40.404284  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:03:40.415301  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:03:40.429673  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.434681  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.434762  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:03:40.441456  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
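	The test/ln commands above make each CA certificate discoverable by OpenSSL clients: the link name (for example b5213941.0) is the output of "openssl x509 -hash -noout" for that certificate. A Go sketch of the same pattern, shelling out to openssl for the hash and creating the symlink; the linkCACert helper is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert asks openssl for the certificate's subject hash, then
	// symlinks /etc/ssl/certs/<hash>.0 to the certificate, as in the log.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs: drop any stale link first, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}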
	I0826 11:03:40.454176  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:03:40.461169  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:03:40.461255  117024 kubeadm.go:392] StartCluster: {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:03:40.461356  117024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:03:40.461426  117024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:03:40.516064  117024 cri.go:89] found id: ""
	I0826 11:03:40.516165  117024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 11:03:40.526222  117024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 11:03:40.535869  117024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:03:40.545763  117024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:03:40.545788  117024 kubeadm.go:157] found existing configuration files:
	
	I0826 11:03:40.545844  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:03:40.555213  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:03:40.555301  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:03:40.565112  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:03:40.574562  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:03:40.574662  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:03:40.584245  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:03:40.593223  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:03:40.593296  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:03:40.602877  117024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:03:40.612049  117024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:03:40.612126  117024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:03:40.621273  117024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 11:03:40.725441  117024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 11:03:40.725600  117024 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 11:03:40.816670  117024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 11:03:40.816813  117024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 11:03:40.816995  117024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 11:03:40.826481  117024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 11:03:40.846502  117024 out.go:235]   - Generating certificates and keys ...
	I0826 11:03:40.846634  117024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 11:03:40.846702  117024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 11:03:41.055404  117024 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 11:03:41.169930  117024 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 11:03:41.344289  117024 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 11:03:41.612958  117024 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 11:03:41.777675  117024 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 11:03:41.777838  117024 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-055395 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0826 11:03:42.045956  117024 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 11:03:42.046165  117024 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-055395 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0826 11:03:42.219563  117024 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 11:03:42.366975  117024 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 11:03:42.434860  117024 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 11:03:42.434957  117024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 11:03:42.700092  117024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 11:03:42.881338  117024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 11:03:43.096762  117024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 11:03:43.319011  117024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 11:03:43.375586  117024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 11:03:43.376129  117024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 11:03:43.380586  117024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 11:03:43.458697  117024 out.go:235]   - Booting up control plane ...
	I0826 11:03:43.458888  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 11:03:43.459052  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 11:03:43.459158  117024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 11:03:43.459309  117024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 11:03:43.459455  117024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 11:03:43.459521  117024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 11:03:43.551735  117024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 11:03:43.551858  117024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 11:03:44.552521  117024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001356152s
	I0826 11:03:44.552618  117024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 11:03:50.224303  117024 kubeadm.go:310] [api-check] The API server is healthy after 5.67445267s
	I0826 11:03:50.237911  117024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 11:03:50.263772  117024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 11:03:50.807085  117024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 11:03:50.807295  117024 kubeadm.go:310] [mark-control-plane] Marking the node ha-055395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 11:03:50.820746  117024 kubeadm.go:310] [bootstrap-token] Using token: pkf7iv.zgxj01v83wryjd35
	I0826 11:03:50.822481  117024 out.go:235]   - Configuring RBAC rules ...
	I0826 11:03:50.822621  117024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 11:03:50.832725  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 11:03:50.841787  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 11:03:50.846377  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 11:03:50.850960  117024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 11:03:50.855809  117024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 11:03:50.872409  117024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 11:03:51.150143  117024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 11:03:51.632447  117024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 11:03:51.632473  117024 kubeadm.go:310] 
	I0826 11:03:51.632527  117024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 11:03:51.632531  117024 kubeadm.go:310] 
	I0826 11:03:51.632656  117024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 11:03:51.632668  117024 kubeadm.go:310] 
	I0826 11:03:51.632695  117024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 11:03:51.632809  117024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 11:03:51.632894  117024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 11:03:51.632902  117024 kubeadm.go:310] 
	I0826 11:03:51.632943  117024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 11:03:51.632949  117024 kubeadm.go:310] 
	I0826 11:03:51.633001  117024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 11:03:51.633024  117024 kubeadm.go:310] 
	I0826 11:03:51.633067  117024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 11:03:51.633154  117024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 11:03:51.633256  117024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 11:03:51.633266  117024 kubeadm.go:310] 
	I0826 11:03:51.633373  117024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 11:03:51.633484  117024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 11:03:51.633495  117024 kubeadm.go:310] 
	I0826 11:03:51.633602  117024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pkf7iv.zgxj01v83wryjd35 \
	I0826 11:03:51.633728  117024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 11:03:51.633768  117024 kubeadm.go:310] 	--control-plane 
	I0826 11:03:51.633775  117024 kubeadm.go:310] 
	I0826 11:03:51.633844  117024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 11:03:51.633850  117024 kubeadm.go:310] 
	I0826 11:03:51.633917  117024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pkf7iv.zgxj01v83wryjd35 \
	I0826 11:03:51.634004  117024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 11:03:51.634819  117024 kubeadm.go:310] W0826 11:03:40.707678     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 11:03:51.635147  117024 kubeadm.go:310] W0826 11:03:40.708593     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 11:03:51.635289  117024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 11:03:51.635334  117024 cni.go:84] Creating CNI manager for ""
	I0826 11:03:51.635349  117024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0826 11:03:51.637400  117024 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0826 11:03:51.639006  117024 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0826 11:03:51.645091  117024 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0826 11:03:51.645116  117024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0826 11:03:51.666922  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0826 11:03:52.066335  117024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 11:03:52.066465  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395 minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=true
	I0826 11:03:52.066488  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:52.189227  117024 ops.go:34] apiserver oom_adj: -16
	I0826 11:03:52.238042  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:52.738872  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:53.238660  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:53.738325  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:54.238982  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:54.738215  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.239022  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.738912  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 11:03:55.832808  117024 kubeadm.go:1113] duration metric: took 3.766437401s to wait for elevateKubeSystemPrivileges
	I0826 11:03:55.832874  117024 kubeadm.go:394] duration metric: took 15.371615091s to StartCluster
	I0826 11:03:55.832909  117024 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:55.832991  117024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:55.833735  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:03:55.833973  117024 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:55.833987  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0826 11:03:55.834002  117024 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 11:03:55.834041  117024 addons.go:69] Setting storage-provisioner=true in profile "ha-055395"
	I0826 11:03:55.834065  117024 addons.go:234] Setting addon storage-provisioner=true in "ha-055395"
	I0826 11:03:55.834088  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:03:55.833995  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:03:55.834109  117024 addons.go:69] Setting default-storageclass=true in profile "ha-055395"
	I0826 11:03:55.834146  117024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-055395"
	I0826 11:03:55.834148  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:55.834417  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.834465  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.834526  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.834557  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.850645  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0826 11:03:55.850816  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0826 11:03:55.851241  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.851370  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.851833  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.851857  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.851904  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.851929  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.852256  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.852317  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.852426  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.852909  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.852938  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.854651  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:03:55.855019  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 11:03:55.855525  117024 cert_rotation.go:140] Starting client certificate rotation controller
	I0826 11:03:55.855926  117024 addons.go:234] Setting addon default-storageclass=true in "ha-055395"
	I0826 11:03:55.855977  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:03:55.856371  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.856407  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.869749  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0826 11:03:55.870232  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.870693  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.870713  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.871106  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.871333  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.871710  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
	I0826 11:03:55.872155  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.872668  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.872690  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.873028  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:55.873046  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.873632  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:55.873689  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:55.875223  117024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:03:55.876601  117024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:03:55.876623  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 11:03:55.876643  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:55.880087  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.880531  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:55.880555  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.880781  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:55.880990  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:55.881154  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:55.881308  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:55.893589  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
	I0826 11:03:55.894113  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:55.894624  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:55.894651  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:55.895062  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:55.895257  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:03:55.896973  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:03:55.897210  117024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 11:03:55.897224  117024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 11:03:55.897240  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:03:55.900744  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.901224  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:03:55.901251  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:03:55.901403  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:03:55.901602  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:03:55.901764  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:03:55.901982  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:03:56.002634  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0826 11:03:56.043456  117024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:03:56.066165  117024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 11:03:56.601633  117024 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0826 11:03:56.858426  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858454  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858534  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858558  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858910  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.858922  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.858924  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.858925  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.858944  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.858954  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.858962  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.858933  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.859022  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.859031  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.859161  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.859222  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.859223  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.859302  117024 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 11:03:56.859329  117024 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 11:03:56.859433  117024 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0826 11:03:56.859444  117024 round_trippers.go:469] Request Headers:
	I0826 11:03:56.859454  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:03:56.859463  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:03:56.859479  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.859435  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.859541  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.875875  117024 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0826 11:03:56.876521  117024 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0826 11:03:56.876537  117024 round_trippers.go:469] Request Headers:
	I0826 11:03:56.876544  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:03:56.876549  117024 round_trippers.go:473]     Content-Type: application/json
	I0826 11:03:56.876553  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:03:56.881215  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:03:56.881431  117024 main.go:141] libmachine: Making call to close driver server
	I0826 11:03:56.881450  117024 main.go:141] libmachine: (ha-055395) Calling .Close
	I0826 11:03:56.881766  117024 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:03:56.881785  117024 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:03:56.881785  117024 main.go:141] libmachine: (ha-055395) DBG | Closing plugin on server side
	I0826 11:03:56.883642  117024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0826 11:03:56.884883  117024 addons.go:510] duration metric: took 1.050875595s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0826 11:03:56.884934  117024 start.go:246] waiting for cluster config update ...
	I0826 11:03:56.884951  117024 start.go:255] writing updated cluster config ...
	I0826 11:03:56.886530  117024 out.go:201] 
	I0826 11:03:56.887959  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:03:56.888029  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:56.889488  117024 out.go:177] * Starting "ha-055395-m02" control-plane node in "ha-055395" cluster
	I0826 11:03:56.890519  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:03:56.890546  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:03:56.890653  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:03:56.890667  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:03:56.890733  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:03:56.890995  117024 start.go:360] acquireMachinesLock for ha-055395-m02: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:03:56.891059  117024 start.go:364] duration metric: took 39.036µs to acquireMachinesLock for "ha-055395-m02"
	I0826 11:03:56.891085  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:03:56.891180  117024 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0826 11:03:56.892849  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:03:56.892928  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:03:56.892956  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:03:56.908421  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0826 11:03:56.908931  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:03:56.909451  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:03:56.909474  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:03:56.909912  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:03:56.910102  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:03:56.910242  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:03:56.910395  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:03:56.910417  117024 client.go:168] LocalClient.Create starting
	I0826 11:03:56.910446  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:03:56.910483  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:56.910498  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:56.910556  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:03:56.910576  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:03:56.910588  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:03:56.910604  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:03:56.910612  117024 main.go:141] libmachine: (ha-055395-m02) Calling .PreCreateCheck
	I0826 11:03:56.910729  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:03:56.911153  117024 main.go:141] libmachine: Creating machine...
	I0826 11:03:56.911169  117024 main.go:141] libmachine: (ha-055395-m02) Calling .Create
	I0826 11:03:56.911293  117024 main.go:141] libmachine: (ha-055395-m02) Creating KVM machine...
	I0826 11:03:56.912625  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found existing default KVM network
	I0826 11:03:56.912797  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found existing private KVM network mk-ha-055395
	I0826 11:03:56.912931  117024 main.go:141] libmachine: (ha-055395-m02) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 ...
	I0826 11:03:56.912950  117024 main.go:141] libmachine: (ha-055395-m02) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:03:56.913032  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:56.912933  117411 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:56.913133  117024 main.go:141] libmachine: (ha-055395-m02) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:03:57.178677  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.178502  117411 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa...
	I0826 11:03:57.355999  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.355865  117411 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/ha-055395-m02.rawdisk...
	I0826 11:03:57.356029  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Writing magic tar header
	I0826 11:03:57.356040  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Writing SSH key tar header
	I0826 11:03:57.356157  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:57.356040  117411 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 ...
	I0826 11:03:57.356241  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02 (perms=drwx------)
	I0826 11:03:57.356257  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02
	I0826 11:03:57.356264  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:03:57.356271  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:03:57.356283  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:03:57.356295  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:03:57.356308  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:03:57.356319  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:03:57.356334  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Checking permissions on dir: /home
	I0826 11:03:57.356349  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:03:57.356357  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Skipping /home - not owner
	I0826 11:03:57.356369  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:03:57.356377  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:03:57.356384  117024 main.go:141] libmachine: (ha-055395-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:03:57.356391  117024 main.go:141] libmachine: (ha-055395-m02) Creating domain...
	I0826 11:03:57.357519  117024 main.go:141] libmachine: (ha-055395-m02) define libvirt domain using xml: 
	I0826 11:03:57.357543  117024 main.go:141] libmachine: (ha-055395-m02) <domain type='kvm'>
	I0826 11:03:57.357577  117024 main.go:141] libmachine: (ha-055395-m02)   <name>ha-055395-m02</name>
	I0826 11:03:57.357594  117024 main.go:141] libmachine: (ha-055395-m02)   <memory unit='MiB'>2200</memory>
	I0826 11:03:57.357603  117024 main.go:141] libmachine: (ha-055395-m02)   <vcpu>2</vcpu>
	I0826 11:03:57.357613  117024 main.go:141] libmachine: (ha-055395-m02)   <features>
	I0826 11:03:57.357621  117024 main.go:141] libmachine: (ha-055395-m02)     <acpi/>
	I0826 11:03:57.357636  117024 main.go:141] libmachine: (ha-055395-m02)     <apic/>
	I0826 11:03:57.357655  117024 main.go:141] libmachine: (ha-055395-m02)     <pae/>
	I0826 11:03:57.357664  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.357691  117024 main.go:141] libmachine: (ha-055395-m02)   </features>
	I0826 11:03:57.357711  117024 main.go:141] libmachine: (ha-055395-m02)   <cpu mode='host-passthrough'>
	I0826 11:03:57.357723  117024 main.go:141] libmachine: (ha-055395-m02)   
	I0826 11:03:57.357738  117024 main.go:141] libmachine: (ha-055395-m02)   </cpu>
	I0826 11:03:57.357747  117024 main.go:141] libmachine: (ha-055395-m02)   <os>
	I0826 11:03:57.357759  117024 main.go:141] libmachine: (ha-055395-m02)     <type>hvm</type>
	I0826 11:03:57.357772  117024 main.go:141] libmachine: (ha-055395-m02)     <boot dev='cdrom'/>
	I0826 11:03:57.357787  117024 main.go:141] libmachine: (ha-055395-m02)     <boot dev='hd'/>
	I0826 11:03:57.357799  117024 main.go:141] libmachine: (ha-055395-m02)     <bootmenu enable='no'/>
	I0826 11:03:57.357808  117024 main.go:141] libmachine: (ha-055395-m02)   </os>
	I0826 11:03:57.357816  117024 main.go:141] libmachine: (ha-055395-m02)   <devices>
	I0826 11:03:57.357827  117024 main.go:141] libmachine: (ha-055395-m02)     <disk type='file' device='cdrom'>
	I0826 11:03:57.357841  117024 main.go:141] libmachine: (ha-055395-m02)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/boot2docker.iso'/>
	I0826 11:03:57.357851  117024 main.go:141] libmachine: (ha-055395-m02)       <target dev='hdc' bus='scsi'/>
	I0826 11:03:57.357869  117024 main.go:141] libmachine: (ha-055395-m02)       <readonly/>
	I0826 11:03:57.357878  117024 main.go:141] libmachine: (ha-055395-m02)     </disk>
	I0826 11:03:57.357981  117024 main.go:141] libmachine: (ha-055395-m02)     <disk type='file' device='disk'>
	I0826 11:03:57.358026  117024 main.go:141] libmachine: (ha-055395-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:03:57.358046  117024 main.go:141] libmachine: (ha-055395-m02)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/ha-055395-m02.rawdisk'/>
	I0826 11:03:57.358060  117024 main.go:141] libmachine: (ha-055395-m02)       <target dev='hda' bus='virtio'/>
	I0826 11:03:57.358072  117024 main.go:141] libmachine: (ha-055395-m02)     </disk>
	I0826 11:03:57.358085  117024 main.go:141] libmachine: (ha-055395-m02)     <interface type='network'>
	I0826 11:03:57.358128  117024 main.go:141] libmachine: (ha-055395-m02)       <source network='mk-ha-055395'/>
	I0826 11:03:57.358155  117024 main.go:141] libmachine: (ha-055395-m02)       <model type='virtio'/>
	I0826 11:03:57.358165  117024 main.go:141] libmachine: (ha-055395-m02)     </interface>
	I0826 11:03:57.358173  117024 main.go:141] libmachine: (ha-055395-m02)     <interface type='network'>
	I0826 11:03:57.358180  117024 main.go:141] libmachine: (ha-055395-m02)       <source network='default'/>
	I0826 11:03:57.358185  117024 main.go:141] libmachine: (ha-055395-m02)       <model type='virtio'/>
	I0826 11:03:57.358195  117024 main.go:141] libmachine: (ha-055395-m02)     </interface>
	I0826 11:03:57.358208  117024 main.go:141] libmachine: (ha-055395-m02)     <serial type='pty'>
	I0826 11:03:57.358218  117024 main.go:141] libmachine: (ha-055395-m02)       <target port='0'/>
	I0826 11:03:57.358224  117024 main.go:141] libmachine: (ha-055395-m02)     </serial>
	I0826 11:03:57.358234  117024 main.go:141] libmachine: (ha-055395-m02)     <console type='pty'>
	I0826 11:03:57.358245  117024 main.go:141] libmachine: (ha-055395-m02)       <target type='serial' port='0'/>
	I0826 11:03:57.358260  117024 main.go:141] libmachine: (ha-055395-m02)     </console>
	I0826 11:03:57.358272  117024 main.go:141] libmachine: (ha-055395-m02)     <rng model='virtio'>
	I0826 11:03:57.358310  117024 main.go:141] libmachine: (ha-055395-m02)       <backend model='random'>/dev/random</backend>
	I0826 11:03:57.358334  117024 main.go:141] libmachine: (ha-055395-m02)     </rng>
	I0826 11:03:57.358346  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.358361  117024 main.go:141] libmachine: (ha-055395-m02)     
	I0826 11:03:57.358372  117024 main.go:141] libmachine: (ha-055395-m02)   </devices>
	I0826 11:03:57.358381  117024 main.go:141] libmachine: (ha-055395-m02) </domain>
	I0826 11:03:57.358395  117024 main.go:141] libmachine: (ha-055395-m02) 
	I0826 11:03:57.365313  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:e2:d8:6e in network default
	I0826 11:03:57.365914  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring networks are active...
	I0826 11:03:57.365942  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:57.366688  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring network default is active
	I0826 11:03:57.367068  117024 main.go:141] libmachine: (ha-055395-m02) Ensuring network mk-ha-055395 is active
	I0826 11:03:57.367494  117024 main.go:141] libmachine: (ha-055395-m02) Getting domain xml...
	I0826 11:03:57.368172  117024 main.go:141] libmachine: (ha-055395-m02) Creating domain...
	I0826 11:03:58.586476  117024 main.go:141] libmachine: (ha-055395-m02) Waiting to get IP...
	I0826 11:03:58.587260  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:58.587652  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:58.587674  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:58.587630  117411 retry.go:31] will retry after 235.776027ms: waiting for machine to come up
	I0826 11:03:58.825143  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:58.825716  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:58.825747  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:58.825675  117411 retry.go:31] will retry after 269.486383ms: waiting for machine to come up
	I0826 11:03:59.097093  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.097562  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.097597  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.097517  117411 retry.go:31] will retry after 427.352721ms: waiting for machine to come up
	I0826 11:03:59.526343  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.526897  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.526932  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.526871  117411 retry.go:31] will retry after 411.230052ms: waiting for machine to come up
	I0826 11:03:59.939173  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:03:59.939687  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:03:59.939718  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:03:59.939636  117411 retry.go:31] will retry after 699.606269ms: waiting for machine to come up
	I0826 11:04:00.640504  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:00.641135  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:00.641165  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:00.641073  117411 retry.go:31] will retry after 906.425603ms: waiting for machine to come up
	I0826 11:04:01.549180  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:01.549749  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:01.549835  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:01.549724  117411 retry.go:31] will retry after 1.180965246s: waiting for machine to come up
	I0826 11:04:02.732557  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:02.733074  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:02.733112  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:02.733019  117411 retry.go:31] will retry after 937.830995ms: waiting for machine to come up
	I0826 11:04:03.671965  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:03.672355  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:03.672377  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:03.672311  117411 retry.go:31] will retry after 1.614048809s: waiting for machine to come up
	I0826 11:04:05.289158  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:05.289646  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:05.289671  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:05.289570  117411 retry.go:31] will retry after 1.660352387s: waiting for machine to come up
	I0826 11:04:06.951776  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:06.952237  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:06.952281  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:06.952117  117411 retry.go:31] will retry after 2.116784544s: waiting for machine to come up
	I0826 11:04:09.071540  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:09.072018  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:09.072043  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:09.071942  117411 retry.go:31] will retry after 3.356650421s: waiting for machine to come up
	I0826 11:04:12.429954  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:12.430444  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:12.430474  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:12.430409  117411 retry.go:31] will retry after 3.216911436s: waiting for machine to come up
	I0826 11:04:15.648479  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:15.648901  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find current IP address of domain ha-055395-m02 in network mk-ha-055395
	I0826 11:04:15.648924  117024 main.go:141] libmachine: (ha-055395-m02) DBG | I0826 11:04:15.648860  117411 retry.go:31] will retry after 4.040420472s: waiting for machine to come up
	I0826 11:04:19.692722  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.693185  117024 main.go:141] libmachine: (ha-055395-m02) Found IP for machine: 192.168.39.55
	I0826 11:04:19.693210  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has current primary IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.693216  117024 main.go:141] libmachine: (ha-055395-m02) Reserving static IP address...
	I0826 11:04:19.693567  117024 main.go:141] libmachine: (ha-055395-m02) DBG | unable to find host DHCP lease matching {name: "ha-055395-m02", mac: "52:54:00:5f:d6:56", ip: "192.168.39.55"} in network mk-ha-055395
	I0826 11:04:19.781117  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Getting to WaitForSSH function...
	I0826 11:04:19.781148  117024 main.go:141] libmachine: (ha-055395-m02) Reserved static IP address: 192.168.39.55
	I0826 11:04:19.781161  117024 main.go:141] libmachine: (ha-055395-m02) Waiting for SSH to be available...
	I0826 11:04:19.784367  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.784768  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:19.784795  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.784974  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using SSH client type: external
	I0826 11:04:19.784999  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa (-rw-------)
	I0826 11:04:19.785030  117024 main.go:141] libmachine: (ha-055395-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:04:19.785043  117024 main.go:141] libmachine: (ha-055395-m02) DBG | About to run SSH command:
	I0826 11:04:19.785064  117024 main.go:141] libmachine: (ha-055395-m02) DBG | exit 0
	I0826 11:04:19.915229  117024 main.go:141] libmachine: (ha-055395-m02) DBG | SSH cmd err, output: <nil>: 
	I0826 11:04:19.915559  117024 main.go:141] libmachine: (ha-055395-m02) KVM machine creation complete!
	I0826 11:04:19.915873  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:04:19.916417  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:19.916675  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:19.916865  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:04:19.916883  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:04:19.918440  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:04:19.918459  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:04:19.918465  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:04:19.918471  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:19.920873  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.921334  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:19.921356  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:19.921499  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:19.921706  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:19.921870  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:19.922008  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:19.922142  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:19.922384  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:19.922398  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:04:20.038102  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:04:20.038125  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:04:20.038136  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.041029  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.041452  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.041479  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.041658  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.041929  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.042119  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.042346  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.042520  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.042736  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.042754  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:04:20.155301  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:04:20.155391  117024 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:04:20.155404  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:04:20.155412  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.155683  117024 buildroot.go:166] provisioning hostname "ha-055395-m02"
	I0826 11:04:20.155714  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.155950  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.158677  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.159089  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.159115  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.159260  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.159461  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.159648  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.159832  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.160036  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.160211  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.160224  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395-m02 && echo "ha-055395-m02" | sudo tee /etc/hostname
	I0826 11:04:20.288938  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395-m02
	
	I0826 11:04:20.288967  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.291507  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.291844  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.291875  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.292018  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.292221  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.292406  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.292583  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.292738  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.292903  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.292922  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:04:20.415598  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:04:20.415634  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:04:20.415668  117024 buildroot.go:174] setting up certificates
	I0826 11:04:20.415682  117024 provision.go:84] configureAuth start
	I0826 11:04:20.415697  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetMachineName
	I0826 11:04:20.416038  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:20.418919  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.419439  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.419471  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.419648  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.422258  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.422678  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.422708  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.422942  117024 provision.go:143] copyHostCerts
	I0826 11:04:20.422981  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:04:20.423021  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:04:20.423030  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:04:20.423098  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:04:20.423170  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:04:20.423187  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:04:20.423194  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:04:20.423216  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:04:20.423312  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:04:20.423332  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:04:20.423339  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:04:20.423364  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:04:20.423415  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395-m02 san=[127.0.0.1 192.168.39.55 ha-055395-m02 localhost minikube]
	I0826 11:04:20.503018  117024 provision.go:177] copyRemoteCerts
	I0826 11:04:20.503077  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:04:20.503104  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.505923  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.506307  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.506345  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.506622  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.506925  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.507112  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.507286  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:20.592967  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:04:20.593046  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:04:20.619679  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:04:20.619755  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 11:04:20.644651  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:04:20.644725  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:04:20.667901  117024 provision.go:87] duration metric: took 252.203794ms to configureAuth
	I0826 11:04:20.667931  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:04:20.668106  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:20.668216  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.670977  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.671395  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.671433  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.671752  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.672005  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.672211  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.672415  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.672608  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:20.672844  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:20.672878  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:04:20.948815  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:04:20.948861  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:04:20.948873  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetURL
	I0826 11:04:20.950251  117024 main.go:141] libmachine: (ha-055395-m02) DBG | Using libvirt version 6000000
	I0826 11:04:20.952436  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.952776  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.952807  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.952997  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:04:20.953021  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:04:20.953030  117024 client.go:171] duration metric: took 24.042605537s to LocalClient.Create
	I0826 11:04:20.953060  117024 start.go:167] duration metric: took 24.042663921s to libmachine.API.Create "ha-055395"
	I0826 11:04:20.953073  117024 start.go:293] postStartSetup for "ha-055395-m02" (driver="kvm2")
	I0826 11:04:20.953088  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:04:20.953113  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:20.953361  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:04:20.953392  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:20.955636  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.955962  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:20.955989  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:20.956118  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:20.956321  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:20.956465  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:20.956602  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.040754  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:04:21.044756  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:04:21.044795  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:04:21.044880  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:04:21.044975  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:04:21.044989  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:04:21.045101  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:04:21.054381  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:04:21.079000  117024 start.go:296] duration metric: took 125.909237ms for postStartSetup
	I0826 11:04:21.079062  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetConfigRaw
	I0826 11:04:21.079683  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:21.082204  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.082539  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.082570  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.082859  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:04:21.083097  117024 start.go:128] duration metric: took 24.191904547s to createHost
	I0826 11:04:21.083127  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:21.085311  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.085611  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.085640  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.085787  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.086000  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.086143  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.086286  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.086429  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:04:21.086612  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0826 11:04:21.086626  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:04:21.199436  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670261.177401120
	
	I0826 11:04:21.199477  117024 fix.go:216] guest clock: 1724670261.177401120
	I0826 11:04:21.199490  117024 fix.go:229] Guest: 2024-08-26 11:04:21.17740112 +0000 UTC Remote: 2024-08-26 11:04:21.083111953 +0000 UTC m=+71.286637863 (delta=94.289167ms)
	I0826 11:04:21.199519  117024 fix.go:200] guest clock delta is within tolerance: 94.289167ms
	I0826 11:04:21.199528  117024 start.go:83] releasing machines lock for "ha-055395-m02", held for 24.308458499s
	I0826 11:04:21.199551  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.199905  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:21.202606  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.202979  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.203011  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.205551  117024 out.go:177] * Found network options:
	I0826 11:04:21.207306  117024 out.go:177]   - NO_PROXY=192.168.39.150
	W0826 11:04:21.208816  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:04:21.208855  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209465  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209714  117024 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:04:21.209822  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:04:21.209879  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	W0826 11:04:21.209975  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:04:21.210049  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:04:21.210069  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:04:21.212915  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213120  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213267  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.213306  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213503  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.213735  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.213736  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:21.213767  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:21.213903  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.214034  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:04:21.214110  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.214194  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:04:21.214320  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:04:21.214462  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:04:21.450828  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:04:21.457231  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:04:21.457318  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:04:21.472675  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:04:21.472709  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:04:21.472794  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:04:21.488170  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:04:21.501938  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:04:21.502010  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:04:21.515554  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:04:21.536633  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:04:21.651112  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:04:21.814641  117024 docker.go:233] disabling docker service ...
	I0826 11:04:21.814737  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:04:21.829435  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:04:21.843451  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:04:21.966209  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:04:22.100363  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:04:22.114335  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:04:22.133049  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:04:22.133127  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.143659  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:04:22.143745  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.154541  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.165107  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.175808  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:04:22.186717  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.197109  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.214180  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:04:22.224402  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:04:22.233575  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:04:22.233633  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:04:22.245348  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:04:22.254931  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:22.376465  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:04:22.511044  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:04:22.511137  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:04:22.516213  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:04:22.516278  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:04:22.519857  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:04:22.558773  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:04:22.558878  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:04:22.586918  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:04:22.614172  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:04:22.615863  117024 out.go:177]   - env NO_PROXY=192.168.39.150
	I0826 11:04:22.616968  117024 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:04:22.619594  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:22.619939  117024 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:04:10 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:04:22.619968  117024 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:04:22.620182  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:04:22.624219  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:04:22.636424  117024 mustload.go:65] Loading cluster: ha-055395
	I0826 11:04:22.636648  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:22.636947  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:22.636978  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:22.653019  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0826 11:04:22.653445  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:22.653979  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:22.654003  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:22.654293  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:22.654451  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:04:22.656162  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:04:22.656466  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:22.656494  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:22.672944  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0826 11:04:22.673387  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:22.673904  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:22.673927  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:22.674288  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:22.674532  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:04:22.674696  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.55
	I0826 11:04:22.674707  117024 certs.go:194] generating shared ca certs ...
	I0826 11:04:22.674729  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.674916  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:04:22.674975  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:04:22.674990  117024 certs.go:256] generating profile certs ...
	I0826 11:04:22.675079  117024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:04:22.675113  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee
	I0826 11:04:22.675135  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.254]
	I0826 11:04:22.976698  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee ...
	I0826 11:04:22.976739  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee: {Name:mkeb2908f5b47e6d9f85b9f602bb10303a420458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.976948  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee ...
	I0826 11:04:22.976967  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee: {Name:mk9f231c451e39cdf747da04fd51f79cf7ff682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:04:22.977074  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.2c989aee -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:04:22.977234  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.2c989aee -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:04:22.977398  117024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:04:22.977420  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:04:22.977439  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:04:22.977460  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:04:22.977479  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:04:22.977497  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:04:22.977515  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:04:22.977540  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:04:22.977564  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:04:22.977628  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:04:22.977668  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:04:22.977683  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:04:22.977719  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:04:22.977751  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:04:22.977784  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:04:22.977838  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:04:22.977875  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:04:22.977895  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:04:22.977914  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:22.977959  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:04:22.981395  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:22.981666  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:04:22.981700  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:22.981908  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:04:22.982119  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:04:22.982277  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:04:22.982451  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:04:23.055349  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0826 11:04:23.060341  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0826 11:04:23.073292  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0826 11:04:23.077557  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0826 11:04:23.088490  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0826 11:04:23.092368  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0826 11:04:23.103218  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0826 11:04:23.107202  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0826 11:04:23.117560  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0826 11:04:23.121518  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0826 11:04:23.132215  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0826 11:04:23.136539  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0826 11:04:23.147116  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:04:23.171409  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:04:23.194403  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:04:23.218506  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:04:23.242215  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0826 11:04:23.267411  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:04:23.293022  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:04:23.317897  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:04:23.342271  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:04:23.367334  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:04:23.393316  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:04:23.419977  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0826 11:04:23.438309  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0826 11:04:23.456566  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0826 11:04:23.473775  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0826 11:04:23.490302  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0826 11:04:23.506585  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0826 11:04:23.522954  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0826 11:04:23.539793  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:04:23.545182  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:04:23.556023  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.560362  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.560421  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:04:23.566159  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:04:23.576639  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:04:23.587107  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.591447  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.591531  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:04:23.597431  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:04:23.608465  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:04:23.619554  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.624141  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.624224  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:04:23.630571  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:04:23.644543  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:04:23.648946  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:04:23.649016  117024 kubeadm.go:934] updating node {m02 192.168.39.55 8443 v1.31.0 crio true true} ...
	I0826 11:04:23.649106  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:04:23.649133  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:04:23.649178  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:04:23.666133  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:04:23.666229  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0826 11:04:23.666291  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:04:23.676935  117024 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0826 11:04:23.677018  117024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0826 11:04:23.687068  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0826 11:04:23.687097  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:04:23.687153  117024 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0826 11:04:23.687165  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:04:23.687181  117024 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0826 11:04:23.692261  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0826 11:04:23.692318  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0826 11:04:24.574583  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:04:24.574668  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:04:24.580440  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0826 11:04:24.580492  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0826 11:04:24.793518  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:04:24.834011  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:04:24.834141  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:04:24.841051  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0826 11:04:24.841112  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0826 11:04:25.165316  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0826 11:04:25.174892  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0826 11:04:25.190811  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:04:25.206605  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:04:25.222691  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:04:25.226482  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:04:25.238149  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:25.353026  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:04:25.369617  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:04:25.370119  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:04:25.370166  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:04:25.386372  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0826 11:04:25.386895  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:04:25.387386  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:04:25.387415  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:04:25.387809  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:04:25.388059  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:04:25.388270  117024 start.go:317] joinCluster: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:04:25.388403  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0826 11:04:25.388426  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:04:25.391396  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:25.391851  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:04:25.391879  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:04:25.392055  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:04:25.392326  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:04:25.392509  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:04:25.392691  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:04:25.535560  117024 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:04:25.535616  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token heb248.n7ez3d7n5wzk63lz --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m02 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I0826 11:04:47.746497  117024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token heb248.n7ez3d7n5wzk63lz --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m02 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (22.210841711s)
	I0826 11:04:47.746559  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0826 11:04:48.284464  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395-m02 minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=false
	I0826 11:04:48.440122  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-055395-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0826 11:04:48.547073  117024 start.go:319] duration metric: took 23.158795151s to joinCluster
	I0826 11:04:48.547165  117024 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:04:48.547518  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:04:48.548765  117024 out.go:177] * Verifying Kubernetes components...
	I0826 11:04:48.549939  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:04:48.804434  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:04:48.860158  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:04:48.860433  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0826 11:04:48.860510  117024 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0826 11:04:48.860780  117024 node_ready.go:35] waiting up to 6m0s for node "ha-055395-m02" to be "Ready" ...
	I0826 11:04:48.860903  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:48.860913  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:48.860925  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:48.860935  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:48.871060  117024 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0826 11:04:49.361064  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:49.361089  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:49.361099  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:49.361106  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:49.367294  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:04:49.861822  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:49.861860  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:49.861871  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:49.861879  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:49.868555  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:04:50.361911  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:50.361937  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:50.361949  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:50.361955  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:50.366981  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:04:50.861200  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:50.861224  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:50.861232  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:50.861237  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:50.864986  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:50.865469  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:51.361694  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:51.361715  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:51.361724  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:51.361729  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:51.365282  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:51.861230  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:51.861255  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:51.861264  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:51.861267  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:51.864976  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:52.361402  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:52.361433  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:52.361445  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:52.361452  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:52.366098  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:52.861904  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:52.861935  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:52.861946  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:52.861952  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:52.865462  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:52.865917  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:53.361313  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:53.361338  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:53.361345  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:53.361349  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:53.364973  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:53.861451  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:53.861476  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:53.861484  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:53.861488  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:53.865244  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:54.361383  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:54.361410  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:54.361422  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:54.361428  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:54.364518  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:54.861666  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:54.861689  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:54.861698  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:54.861704  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:54.865738  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:54.866477  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:55.361095  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:55.361119  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:55.361127  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:55.361131  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:55.364567  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:55.861780  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:55.861811  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:55.861822  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:55.861829  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:55.866393  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:04:56.361782  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:56.361811  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:56.361819  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:56.361822  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:56.365252  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:56.861287  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:56.861318  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:56.861330  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:56.861337  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:56.864714  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:57.361948  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:57.361972  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:57.361981  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:57.361986  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:57.365912  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:57.366593  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:04:57.861888  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:57.861915  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:57.861925  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:57.861930  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:57.865634  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:58.361904  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:58.361931  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:58.361941  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:58.361945  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:58.365667  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:58.861853  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:58.861891  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:58.861900  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:58.861907  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:58.865726  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.361793  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:59.361823  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:59.361834  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:59.361840  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:59.365832  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.861260  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:04:59.861285  117024 round_trippers.go:469] Request Headers:
	I0826 11:04:59.861294  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:04:59.861299  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:04:59.864892  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:04:59.865370  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:00.361267  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:00.361291  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:00.361299  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:00.361305  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:00.365438  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:00.861087  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:00.861114  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:00.861122  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:00.861126  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:00.864819  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:01.361902  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:01.361926  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:01.361936  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:01.361940  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:01.369857  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:01.861803  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:01.861828  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:01.861844  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:01.861848  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:01.871050  117024 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0826 11:05:01.871847  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:02.361608  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:02.361633  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:02.361642  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:02.361648  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:02.365064  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:02.861963  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:02.861990  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:02.862000  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:02.862006  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:02.865660  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:03.361732  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:03.361755  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:03.361764  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:03.361768  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:03.364737  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:03.861552  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:03.861601  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:03.861614  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:03.861621  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:03.865170  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:04.361105  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:04.361133  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:04.361145  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:04.361152  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:04.368710  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:04.369243  117024 node_ready.go:53] node "ha-055395-m02" has status "Ready":"False"
	I0826 11:05:04.861827  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:04.861851  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:04.861859  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:04.861871  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:04.865949  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:05.361138  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.361173  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.361182  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.361187  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.365128  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:05.861020  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.861043  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.861050  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.861055  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.864793  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:05.865497  117024 node_ready.go:49] node "ha-055395-m02" has status "Ready":"True"
	I0826 11:05:05.865520  117024 node_ready.go:38] duration metric: took 17.004719825s for node "ha-055395-m02" to be "Ready" ...
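The node_ready wait logged above is a simple poll: roughly every 500ms the client GETs /api/v1/nodes/ha-055395-m02 and checks whether the NodeReady condition has flipped to True, giving up after 6m0s. A minimal client-go sketch of the same pattern, not minikube's actual implementation (the kubeconfig path and function name are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its NodeReady condition is True
    // or the timeout expires, mirroring the ~500ms cadence seen in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready after %s", name, timeout)
    }

    func main() {
        // Kubeconfig path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "ha-055395-m02", 6*time.Minute); err != nil {
            panic(err)
        }
    }
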
	I0826 11:05:05.865530  117024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:05:05.865650  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:05.865664  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.865672  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.865675  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.870300  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:05.876702  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.876824  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-l9bd4
	I0826 11:05:05.876838  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.876849  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.876853  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.879865  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.880686  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.880712  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.880724  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.880733  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.883394  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.883987  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.884013  117024 pod_ready.go:82] duration metric: took 7.283098ms for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.884025  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.884102  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nxb7s
	I0826 11:05:05.884111  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.884118  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.884121  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.889711  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:05:05.890322  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.890337  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.890346  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.890350  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.892694  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.893279  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.893299  117024 pod_ready.go:82] duration metric: took 9.266073ms for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.893309  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.893362  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395
	I0826 11:05:05.893369  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.893376  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.893382  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.895591  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.896319  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:05.896337  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.896344  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.896347  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.898519  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.899070  117024 pod_ready.go:93] pod "etcd-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.899092  117024 pod_ready.go:82] duration metric: took 5.777255ms for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.899101  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.899154  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m02
	I0826 11:05:05.899161  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.899169  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.899172  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.901532  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.902187  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:05.902203  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:05.902210  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:05.902213  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:05.904416  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:05:05.904939  117024 pod_ready.go:93] pod "etcd-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:05.904963  117024 pod_ready.go:82] duration metric: took 5.854431ms for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:05.904981  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.061433  117024 request.go:632] Waited for 156.35745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:05:06.061501  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:05:06.061506  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.061514  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.061519  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.065047  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.261049  117024 request.go:632] Waited for 195.314476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:06.261148  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:06.261158  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.261166  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.261170  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.264280  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.264795  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:06.264819  117024 pod_ready.go:82] duration metric: took 359.824941ms for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.264833  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.461954  117024 request.go:632] Waited for 197.042196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:05:06.462020  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:05:06.462025  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.462033  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.462036  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.466440  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:06.661712  117024 request.go:632] Waited for 194.398891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:06.661794  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:06.661808  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.661823  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.661833  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.665283  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:06.665829  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:06.665851  117024 pod_ready.go:82] duration metric: took 401.010339ms for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.665864  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:06.861966  117024 request.go:632] Waited for 196.012019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:05:06.862037  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:05:06.862045  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:06.862055  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:06.862061  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:06.865261  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.061325  117024 request.go:632] Waited for 195.388402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.061387  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.061392  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.061400  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.061404  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.064536  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.065236  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.065255  117024 pod_ready.go:82] duration metric: took 399.384546ms for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.065265  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.261470  117024 request.go:632] Waited for 196.113192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:05:07.261554  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:05:07.261560  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.261568  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.261573  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.267347  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:05:07.461377  117024 request.go:632] Waited for 193.362458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:07.461461  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:07.461467  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.461476  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.461481  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.464748  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.465458  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.465484  117024 pod_ready.go:82] duration metric: took 400.213326ms for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.465496  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.661597  117024 request.go:632] Waited for 195.989071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:05:07.661665  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:05:07.661672  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.661682  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.661687  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.665479  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.861508  117024 request.go:632] Waited for 195.342602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.861590  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:07.861596  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:07.861603  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:07.861609  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:07.865114  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:07.865765  117024 pod_ready.go:93] pod "kube-proxy-g45pb" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:07.865792  117024 pod_ready.go:82] duration metric: took 400.284091ms for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:07.865808  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.061829  117024 request.go:632] Waited for 195.942501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:05:08.061902  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:05:08.061909  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.061919  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.061931  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.065427  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.261431  117024 request.go:632] Waited for 195.392111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:08.261508  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:08.261513  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.261521  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.261525  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.264930  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.265453  117024 pod_ready.go:93] pod "kube-proxy-zl5bm" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:08.265474  117024 pod_ready.go:82] duration metric: took 399.656236ms for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.265485  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.461636  117024 request.go:632] Waited for 196.077133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:05:08.461727  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:05:08.461734  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.461743  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.461748  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.465553  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:08.661574  117024 request.go:632] Waited for 195.266587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:08.661661  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:05:08.661679  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.661701  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.661723  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.666146  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:08.666746  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:08.666774  117024 pod_ready.go:82] duration metric: took 401.281947ms for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.666789  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:08.861810  117024 request.go:632] Waited for 194.923664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:05:08.861893  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:05:08.861902  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:08.861915  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:08.861920  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:08.866150  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.061108  117024 request.go:632] Waited for 194.349918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:09.061183  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:05:09.061190  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.061198  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.061201  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.065073  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:09.065770  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:05:09.065788  117024 pod_ready.go:82] duration metric: took 398.991846ms for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:05:09.065799  117024 pod_ready.go:39] duration metric: took 3.200230423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:05:09.065819  117024 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:05:09.065872  117024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:05:09.081270  117024 api_server.go:72] duration metric: took 20.534056416s to wait for apiserver process to appear ...
	I0826 11:05:09.081304  117024 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:05:09.081329  117024 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0826 11:05:09.088100  117024 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0826 11:05:09.088179  117024 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0826 11:05:09.088191  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.088200  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.088206  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.089274  117024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0826 11:05:09.089389  117024 api_server.go:141] control plane version: v1.31.0
	I0826 11:05:09.089407  117024 api_server.go:131] duration metric: took 8.095684ms to wait for apiserver health ...
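The healthz check above is a plain HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with the body "ok" once the control plane is serving. A minimal sketch of the same probe (TLS verification is skipped only to keep the example short; a real check should trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping certificate verification is for brevity only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.150:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
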
	I0826 11:05:09.089415  117024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:05:09.261829  117024 request.go:632] Waited for 172.333523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.261895  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.261900  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.261913  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.261917  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.269367  117024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0826 11:05:09.273926  117024 system_pods.go:59] 17 kube-system pods found
	I0826 11:05:09.273962  117024 system_pods.go:61] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:05:09.273969  117024 system_pods.go:61] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:05:09.273972  117024 system_pods.go:61] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:05:09.273976  117024 system_pods.go:61] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:05:09.273979  117024 system_pods.go:61] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:05:09.273982  117024 system_pods.go:61] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:05:09.273985  117024 system_pods.go:61] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:05:09.273991  117024 system_pods.go:61] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:05:09.273994  117024 system_pods.go:61] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:05:09.273996  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:05:09.273999  117024 system_pods.go:61] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:05:09.274001  117024 system_pods.go:61] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:05:09.274004  117024 system_pods.go:61] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:05:09.274008  117024 system_pods.go:61] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:05:09.274011  117024 system_pods.go:61] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:05:09.274014  117024 system_pods.go:61] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:05:09.274017  117024 system_pods.go:61] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:05:09.274024  117024 system_pods.go:74] duration metric: took 184.602023ms to wait for pod list to return data ...
	I0826 11:05:09.274032  117024 default_sa.go:34] waiting for default service account to be created ...
	I0826 11:05:09.461497  117024 request.go:632] Waited for 187.376448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:05:09.461558  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:05:09.461565  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.461575  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.461583  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.465682  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.465913  117024 default_sa.go:45] found service account: "default"
	I0826 11:05:09.465932  117024 default_sa.go:55] duration metric: took 191.891229ms for default service account to be created ...
	I0826 11:05:09.465943  117024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 11:05:09.661105  117024 request.go:632] Waited for 195.09125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.661182  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:05:09.661188  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.661209  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.661216  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.665620  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:05:09.671565  117024 system_pods.go:86] 17 kube-system pods found
	I0826 11:05:09.671606  117024 system_pods.go:89] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:05:09.671615  117024 system_pods.go:89] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:05:09.671619  117024 system_pods.go:89] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:05:09.671624  117024 system_pods.go:89] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:05:09.671628  117024 system_pods.go:89] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:05:09.671632  117024 system_pods.go:89] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:05:09.671636  117024 system_pods.go:89] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:05:09.671639  117024 system_pods.go:89] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:05:09.671643  117024 system_pods.go:89] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:05:09.671648  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:05:09.671652  117024 system_pods.go:89] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:05:09.671657  117024 system_pods.go:89] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:05:09.671661  117024 system_pods.go:89] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:05:09.671668  117024 system_pods.go:89] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:05:09.671671  117024 system_pods.go:89] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:05:09.671674  117024 system_pods.go:89] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:05:09.671678  117024 system_pods.go:89] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:05:09.671685  117024 system_pods.go:126] duration metric: took 205.736594ms to wait for k8s-apps to be running ...
	I0826 11:05:09.671694  117024 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 11:05:09.671752  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:05:09.689100  117024 system_svc.go:56] duration metric: took 17.383966ms WaitForService to wait for kubelet
	I0826 11:05:09.689135  117024 kubeadm.go:582] duration metric: took 21.141926576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:05:09.689159  117024 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:05:09.861889  117024 request.go:632] Waited for 172.626501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0826 11:05:09.861954  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0826 11:05:09.861960  117024 round_trippers.go:469] Request Headers:
	I0826 11:05:09.861973  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:05:09.861980  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:05:09.865779  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:05:09.866767  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:05:09.866794  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:05:09.866806  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:05:09.866809  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:05:09.866813  117024 node_conditions.go:105] duration metric: took 177.648393ms to run NodePressure ...
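The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: the rest.Config logged earlier has QPS:0 and Burst:0, so the client falls back to the defaults (about 5 requests/s with a burst of 10), and the back-to-back pod and node GETs above exceed that. A minimal sketch of raising those limits when building a client (path and values are illustrative, not what minikube does):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Leaving QPS/Burst at zero uses client-go's defaults, which is what
        // produced the throttling waits in the log above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // use the clientset as usual
    }
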
	I0826 11:05:09.866827  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:05:09.866865  117024 start.go:255] writing updated cluster config ...
	I0826 11:05:09.869315  117024 out.go:201] 
	I0826 11:05:09.871104  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:09.871207  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:09.872908  117024 out.go:177] * Starting "ha-055395-m03" control-plane node in "ha-055395" cluster
	I0826 11:05:09.874141  117024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:05:09.874169  117024 cache.go:56] Caching tarball of preloaded images
	I0826 11:05:09.874292  117024 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:05:09.874308  117024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:05:09.874398  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:09.874604  117024 start.go:360] acquireMachinesLock for ha-055395-m03: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:05:09.874657  117024 start.go:364] duration metric: took 31.281µs to acquireMachinesLock for "ha-055395-m03"
	I0826 11:05:09.874684  117024 start.go:93] Provisioning new machine with config: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:09.874790  117024 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0826 11:05:09.876597  117024 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:05:09.876696  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:09.876739  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:09.894431  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0826 11:05:09.895003  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:09.895611  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:09.895635  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:09.895980  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:09.896192  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:09.896372  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:09.896568  117024 start.go:159] libmachine.API.Create for "ha-055395" (driver="kvm2")
	I0826 11:05:09.896607  117024 client.go:168] LocalClient.Create starting
	I0826 11:05:09.896645  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:05:09.896691  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:05:09.896718  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:05:09.896795  117024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:05:09.896842  117024 main.go:141] libmachine: Decoding PEM data...
	I0826 11:05:09.896854  117024 main.go:141] libmachine: Parsing certificate...
	I0826 11:05:09.896873  117024 main.go:141] libmachine: Running pre-create checks...
	I0826 11:05:09.896881  117024 main.go:141] libmachine: (ha-055395-m03) Calling .PreCreateCheck
	I0826 11:05:09.897088  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:09.897544  117024 main.go:141] libmachine: Creating machine...
	I0826 11:05:09.897560  117024 main.go:141] libmachine: (ha-055395-m03) Calling .Create
	I0826 11:05:09.897707  117024 main.go:141] libmachine: (ha-055395-m03) Creating KVM machine...
	I0826 11:05:09.899194  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found existing default KVM network
	I0826 11:05:09.899385  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found existing private KVM network mk-ha-055395
	I0826 11:05:09.899621  117024 main.go:141] libmachine: (ha-055395-m03) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 ...
	I0826 11:05:09.899645  117024 main.go:141] libmachine: (ha-055395-m03) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:05:09.899762  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:09.899614  117790 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:05:09.899860  117024 main.go:141] libmachine: (ha-055395-m03) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:05:10.156303  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.156140  117790 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa...
	I0826 11:05:10.428332  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.428217  117790 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/ha-055395-m03.rawdisk...
	I0826 11:05:10.428366  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Writing magic tar header
	I0826 11:05:10.428381  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Writing SSH key tar header
	I0826 11:05:10.428400  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:10.428339  117790 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 ...
	I0826 11:05:10.428518  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03
	I0826 11:05:10.428548  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03 (perms=drwx------)
	I0826 11:05:10.428559  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:05:10.428572  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:05:10.428581  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:05:10.428596  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:05:10.428608  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:05:10.428621  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:05:10.428632  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:05:10.428648  117024 main.go:141] libmachine: (ha-055395-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:05:10.428660  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:05:10.428673  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:05:10.428684  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Checking permissions on dir: /home
	I0826 11:05:10.428695  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Skipping /home - not owner
	I0826 11:05:10.428734  117024 main.go:141] libmachine: (ha-055395-m03) Creating domain...
	I0826 11:05:10.429624  117024 main.go:141] libmachine: (ha-055395-m03) define libvirt domain using xml: 
	I0826 11:05:10.429647  117024 main.go:141] libmachine: (ha-055395-m03) <domain type='kvm'>
	I0826 11:05:10.429656  117024 main.go:141] libmachine: (ha-055395-m03)   <name>ha-055395-m03</name>
	I0826 11:05:10.429663  117024 main.go:141] libmachine: (ha-055395-m03)   <memory unit='MiB'>2200</memory>
	I0826 11:05:10.429672  117024 main.go:141] libmachine: (ha-055395-m03)   <vcpu>2</vcpu>
	I0826 11:05:10.429680  117024 main.go:141] libmachine: (ha-055395-m03)   <features>
	I0826 11:05:10.429693  117024 main.go:141] libmachine: (ha-055395-m03)     <acpi/>
	I0826 11:05:10.429700  117024 main.go:141] libmachine: (ha-055395-m03)     <apic/>
	I0826 11:05:10.429720  117024 main.go:141] libmachine: (ha-055395-m03)     <pae/>
	I0826 11:05:10.429728  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.429734  117024 main.go:141] libmachine: (ha-055395-m03)   </features>
	I0826 11:05:10.429738  117024 main.go:141] libmachine: (ha-055395-m03)   <cpu mode='host-passthrough'>
	I0826 11:05:10.429743  117024 main.go:141] libmachine: (ha-055395-m03)   
	I0826 11:05:10.429749  117024 main.go:141] libmachine: (ha-055395-m03)   </cpu>
	I0826 11:05:10.429754  117024 main.go:141] libmachine: (ha-055395-m03)   <os>
	I0826 11:05:10.429759  117024 main.go:141] libmachine: (ha-055395-m03)     <type>hvm</type>
	I0826 11:05:10.429767  117024 main.go:141] libmachine: (ha-055395-m03)     <boot dev='cdrom'/>
	I0826 11:05:10.429784  117024 main.go:141] libmachine: (ha-055395-m03)     <boot dev='hd'/>
	I0826 11:05:10.429794  117024 main.go:141] libmachine: (ha-055395-m03)     <bootmenu enable='no'/>
	I0826 11:05:10.429806  117024 main.go:141] libmachine: (ha-055395-m03)   </os>
	I0826 11:05:10.429814  117024 main.go:141] libmachine: (ha-055395-m03)   <devices>
	I0826 11:05:10.429821  117024 main.go:141] libmachine: (ha-055395-m03)     <disk type='file' device='cdrom'>
	I0826 11:05:10.429833  117024 main.go:141] libmachine: (ha-055395-m03)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/boot2docker.iso'/>
	I0826 11:05:10.429843  117024 main.go:141] libmachine: (ha-055395-m03)       <target dev='hdc' bus='scsi'/>
	I0826 11:05:10.429849  117024 main.go:141] libmachine: (ha-055395-m03)       <readonly/>
	I0826 11:05:10.429857  117024 main.go:141] libmachine: (ha-055395-m03)     </disk>
	I0826 11:05:10.429870  117024 main.go:141] libmachine: (ha-055395-m03)     <disk type='file' device='disk'>
	I0826 11:05:10.429882  117024 main.go:141] libmachine: (ha-055395-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:05:10.429893  117024 main.go:141] libmachine: (ha-055395-m03)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/ha-055395-m03.rawdisk'/>
	I0826 11:05:10.429908  117024 main.go:141] libmachine: (ha-055395-m03)       <target dev='hda' bus='virtio'/>
	I0826 11:05:10.429920  117024 main.go:141] libmachine: (ha-055395-m03)     </disk>
	I0826 11:05:10.429928  117024 main.go:141] libmachine: (ha-055395-m03)     <interface type='network'>
	I0826 11:05:10.429937  117024 main.go:141] libmachine: (ha-055395-m03)       <source network='mk-ha-055395'/>
	I0826 11:05:10.429947  117024 main.go:141] libmachine: (ha-055395-m03)       <model type='virtio'/>
	I0826 11:05:10.429955  117024 main.go:141] libmachine: (ha-055395-m03)     </interface>
	I0826 11:05:10.429965  117024 main.go:141] libmachine: (ha-055395-m03)     <interface type='network'>
	I0826 11:05:10.429972  117024 main.go:141] libmachine: (ha-055395-m03)       <source network='default'/>
	I0826 11:05:10.429979  117024 main.go:141] libmachine: (ha-055395-m03)       <model type='virtio'/>
	I0826 11:05:10.429986  117024 main.go:141] libmachine: (ha-055395-m03)     </interface>
	I0826 11:05:10.429999  117024 main.go:141] libmachine: (ha-055395-m03)     <serial type='pty'>
	I0826 11:05:10.430042  117024 main.go:141] libmachine: (ha-055395-m03)       <target port='0'/>
	I0826 11:05:10.430067  117024 main.go:141] libmachine: (ha-055395-m03)     </serial>
	I0826 11:05:10.430077  117024 main.go:141] libmachine: (ha-055395-m03)     <console type='pty'>
	I0826 11:05:10.430092  117024 main.go:141] libmachine: (ha-055395-m03)       <target type='serial' port='0'/>
	I0826 11:05:10.430103  117024 main.go:141] libmachine: (ha-055395-m03)     </console>
	I0826 11:05:10.430111  117024 main.go:141] libmachine: (ha-055395-m03)     <rng model='virtio'>
	I0826 11:05:10.430123  117024 main.go:141] libmachine: (ha-055395-m03)       <backend model='random'>/dev/random</backend>
	I0826 11:05:10.430133  117024 main.go:141] libmachine: (ha-055395-m03)     </rng>
	I0826 11:05:10.430142  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.430151  117024 main.go:141] libmachine: (ha-055395-m03)     
	I0826 11:05:10.430159  117024 main.go:141] libmachine: (ha-055395-m03)   </devices>
	I0826 11:05:10.430173  117024 main.go:141] libmachine: (ha-055395-m03) </domain>
	I0826 11:05:10.430183  117024 main.go:141] libmachine: (ha-055395-m03) 
	I0826 11:05:10.437631  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:af:f5:37 in network default
	I0826 11:05:10.438408  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:10.438437  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring networks are active...
	I0826 11:05:10.439282  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring network default is active
	I0826 11:05:10.439697  117024 main.go:141] libmachine: (ha-055395-m03) Ensuring network mk-ha-055395 is active
	I0826 11:05:10.440082  117024 main.go:141] libmachine: (ha-055395-m03) Getting domain xml...
	I0826 11:05:10.440757  117024 main.go:141] libmachine: (ha-055395-m03) Creating domain...
	I0826 11:05:11.695519  117024 main.go:141] libmachine: (ha-055395-m03) Waiting to get IP...
	I0826 11:05:11.696382  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:11.696893  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:11.696927  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:11.696862  117790 retry.go:31] will retry after 237.697037ms: waiting for machine to come up
	I0826 11:05:11.936330  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:11.936843  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:11.936875  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:11.936805  117790 retry.go:31] will retry after 256.411063ms: waiting for machine to come up
	I0826 11:05:12.195253  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:12.195710  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:12.195735  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:12.195662  117790 retry.go:31] will retry after 410.928155ms: waiting for machine to come up
	I0826 11:05:12.608313  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:12.608816  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:12.608849  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:12.608750  117790 retry.go:31] will retry after 450.604024ms: waiting for machine to come up
	I0826 11:05:13.061050  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:13.061544  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:13.061583  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:13.061484  117790 retry.go:31] will retry after 526.801583ms: waiting for machine to come up
	I0826 11:05:13.590087  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:13.590593  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:13.590620  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:13.590552  117790 retry.go:31] will retry after 849.29226ms: waiting for machine to come up
	I0826 11:05:14.441473  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:14.441829  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:14.441859  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:14.441776  117790 retry.go:31] will retry after 1.189728783s: waiting for machine to come up
	I0826 11:05:15.633195  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:15.633639  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:15.633669  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:15.633588  117790 retry.go:31] will retry after 1.199187401s: waiting for machine to come up
	I0826 11:05:16.835147  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:16.835662  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:16.835704  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:16.835620  117790 retry.go:31] will retry after 1.739710221s: waiting for machine to come up
	I0826 11:05:18.576454  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:18.576874  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:18.576897  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:18.576826  117790 retry.go:31] will retry after 2.199446152s: waiting for machine to come up
	I0826 11:05:20.778273  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:20.778823  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:20.778875  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:20.778757  117790 retry.go:31] will retry after 2.636484153s: waiting for machine to come up
	I0826 11:05:23.416998  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:23.417588  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:23.417611  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:23.417518  117790 retry.go:31] will retry after 3.455957799s: waiting for machine to come up
	I0826 11:05:26.876008  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:26.876560  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find current IP address of domain ha-055395-m03 in network mk-ha-055395
	I0826 11:05:26.876586  117024 main.go:141] libmachine: (ha-055395-m03) DBG | I0826 11:05:26.876513  117790 retry.go:31] will retry after 4.202229574s: waiting for machine to come up
	I0826 11:05:31.080465  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.081003  117024 main.go:141] libmachine: (ha-055395-m03) Found IP for machine: 192.168.39.209
	I0826 11:05:31.081029  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has current primary IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.081037  117024 main.go:141] libmachine: (ha-055395-m03) Reserving static IP address...
	I0826 11:05:31.081461  117024 main.go:141] libmachine: (ha-055395-m03) DBG | unable to find host DHCP lease matching {name: "ha-055395-m03", mac: "52:54:00:66:85:18", ip: "192.168.39.209"} in network mk-ha-055395
	I0826 11:05:31.166774  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Getting to WaitForSSH function...
	I0826 11:05:31.166804  117024 main.go:141] libmachine: (ha-055395-m03) Reserved static IP address: 192.168.39.209
	I0826 11:05:31.166821  117024 main.go:141] libmachine: (ha-055395-m03) Waiting for SSH to be available...
	I0826 11:05:31.170060  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.170532  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.170562  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.170722  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using SSH client type: external
	I0826 11:05:31.170753  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa (-rw-------)
	I0826 11:05:31.170787  117024 main.go:141] libmachine: (ha-055395-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:05:31.170806  117024 main.go:141] libmachine: (ha-055395-m03) DBG | About to run SSH command:
	I0826 11:05:31.170821  117024 main.go:141] libmachine: (ha-055395-m03) DBG | exit 0
	I0826 11:05:31.299210  117024 main.go:141] libmachine: (ha-055395-m03) DBG | SSH cmd err, output: <nil>: 
	I0826 11:05:31.299491  117024 main.go:141] libmachine: (ha-055395-m03) KVM machine creation complete!
	I0826 11:05:31.299798  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:31.300673  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:31.300901  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:31.301145  117024 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:05:31.301162  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:05:31.302529  117024 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:05:31.302542  117024 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:05:31.302548  117024 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:05:31.302554  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.304944  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.305403  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.305439  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.305607  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.305821  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.306032  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.306190  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.306379  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.306653  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.306670  117024 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:05:31.418170  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:05:31.418208  117024 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:05:31.418219  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.421287  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.421743  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.421770  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.422108  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.422320  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.422524  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.422622  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.422860  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.423114  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.423131  117024 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:05:31.539362  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:05:31.539439  117024 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:05:31.539453  117024 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:05:31.539466  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.539715  117024 buildroot.go:166] provisioning hostname "ha-055395-m03"
	I0826 11:05:31.539744  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.539963  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.542762  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.543219  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.543248  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.543412  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.543603  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.543797  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.543921  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.544113  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.544284  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.544295  117024 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395-m03 && echo "ha-055395-m03" | sudo tee /etc/hostname
	I0826 11:05:31.673405  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395-m03
	
	I0826 11:05:31.673443  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.676254  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.676636  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.676665  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.676869  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:31.677060  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.677174  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:31.677269  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:31.677477  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:31.677705  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:31.677729  117024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:05:31.803218  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:05:31.803255  117024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:05:31.803279  117024 buildroot.go:174] setting up certificates
	I0826 11:05:31.803293  117024 provision.go:84] configureAuth start
	I0826 11:05:31.803307  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetMachineName
	I0826 11:05:31.803594  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:31.806568  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.807033  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.807081  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.807234  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:31.809692  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.810167  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:31.810199  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:31.810447  117024 provision.go:143] copyHostCerts
	I0826 11:05:31.810481  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:05:31.810515  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:05:31.810531  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:05:31.810595  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:05:31.810684  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:05:31.810700  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:05:31.810708  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:05:31.810730  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:05:31.810782  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:05:31.810801  117024 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:05:31.810805  117024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:05:31.810826  117024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:05:31.810923  117024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395-m03 san=[127.0.0.1 192.168.39.209 ha-055395-m03 localhost minikube]
	I0826 11:05:32.024003  117024 provision.go:177] copyRemoteCerts
	I0826 11:05:32.024067  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:05:32.024092  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.027083  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.027444  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.027476  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.027719  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.027959  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.028159  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.028298  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.115106  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:05:32.115186  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:05:32.141709  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:05:32.141798  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:05:32.168738  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:05:32.168829  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0826 11:05:32.195050  117024 provision.go:87] duration metric: took 391.740494ms to configureAuth
	I0826 11:05:32.195084  117024 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:05:32.195329  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:32.195425  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.198753  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.199161  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.199192  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.199445  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.199738  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.199950  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.200106  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.200319  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:32.200499  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:32.200520  117024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:05:32.477056  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:05:32.477107  117024 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:05:32.477119  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetURL
	I0826 11:05:32.478455  117024 main.go:141] libmachine: (ha-055395-m03) DBG | Using libvirt version 6000000
	I0826 11:05:32.480827  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.481167  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.481206  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.481390  117024 main.go:141] libmachine: Docker is up and running!
	I0826 11:05:32.481405  117024 main.go:141] libmachine: Reticulating splines...
	I0826 11:05:32.481412  117024 client.go:171] duration metric: took 22.584796254s to LocalClient.Create
	I0826 11:05:32.481434  117024 start.go:167] duration metric: took 22.584868827s to libmachine.API.Create "ha-055395"
	I0826 11:05:32.481447  117024 start.go:293] postStartSetup for "ha-055395-m03" (driver="kvm2")
	I0826 11:05:32.481465  117024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:05:32.481482  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.481717  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:05:32.481750  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.483864  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.484149  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.484173  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.484353  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.484506  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.484696  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.484848  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.574510  117024 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:05:32.578537  117024 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:05:32.578622  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:05:32.578708  117024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:05:32.578807  117024 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:05:32.578819  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:05:32.578969  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:05:32.588741  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:05:32.613612  117024 start.go:296] duration metric: took 132.146042ms for postStartSetup
	I0826 11:05:32.613670  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetConfigRaw
	I0826 11:05:32.614355  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:32.617168  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.617555  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.617599  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.617883  117024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:05:32.618131  117024 start.go:128] duration metric: took 22.743325947s to createHost
	I0826 11:05:32.618160  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.620518  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.620827  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.620853  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.621046  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.621303  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.621476  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.621603  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.621759  117024 main.go:141] libmachine: Using SSH client type: native
	I0826 11:05:32.622000  117024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0826 11:05:32.622011  117024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:05:32.735826  117024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670332.710972782
	
	I0826 11:05:32.735850  117024 fix.go:216] guest clock: 1724670332.710972782
	I0826 11:05:32.735857  117024 fix.go:229] Guest: 2024-08-26 11:05:32.710972782 +0000 UTC Remote: 2024-08-26 11:05:32.618147148 +0000 UTC m=+142.821673052 (delta=92.825634ms)
	I0826 11:05:32.735876  117024 fix.go:200] guest clock delta is within tolerance: 92.825634ms
	I0826 11:05:32.735883  117024 start.go:83] releasing machines lock for "ha-055395-m03", held for 22.861213322s
	I0826 11:05:32.735903  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.736171  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:32.738728  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.739235  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.739265  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.741921  117024 out.go:177] * Found network options:
	I0826 11:05:32.743431  117024 out.go:177]   - NO_PROXY=192.168.39.150,192.168.39.55
	W0826 11:05:32.744862  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	W0826 11:05:32.744896  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:05:32.744918  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.745727  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.746039  117024 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:05:32.746178  117024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:05:32.746228  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	W0826 11:05:32.746279  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	W0826 11:05:32.746304  117024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0826 11:05:32.746379  117024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:05:32.746404  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:05:32.749366  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749396  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749791  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.749839  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:32.749868  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.749924  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:32.750117  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.750205  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:05:32.750307  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.750383  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:05:32.750447  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.750501  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:05:32.750560  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.750740  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:05:32.985275  117024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:05:32.991074  117024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:05:32.991147  117024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:05:33.008497  117024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:05:33.008543  117024 start.go:495] detecting cgroup driver to use...
	I0826 11:05:33.008624  117024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:05:33.024905  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:05:33.039390  117024 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:05:33.039463  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:05:33.053838  117024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:05:33.069329  117024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:05:33.183597  117024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:05:33.332337  117024 docker.go:233] disabling docker service ...
	I0826 11:05:33.332404  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:05:33.348908  117024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:05:33.362319  117024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:05:33.523528  117024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:05:33.640144  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:05:33.654456  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:05:33.672799  117024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:05:33.672862  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.683357  117024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:05:33.683444  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.693488  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.703741  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.715187  117024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:05:33.726366  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.736814  117024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.755067  117024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:05:33.765140  117024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:05:33.773974  117024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:05:33.774037  117024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:05:33.788271  117024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:05:33.798628  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:05:33.916852  117024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:05:34.055809  117024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:05:34.055894  117024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:05:34.060534  117024 start.go:563] Will wait 60s for crictl version
	I0826 11:05:34.060630  117024 ssh_runner.go:195] Run: which crictl
	I0826 11:05:34.065113  117024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:05:34.112089  117024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:05:34.112197  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:05:34.141440  117024 ssh_runner.go:195] Run: crio --version
	I0826 11:05:34.172725  117024 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:05:34.174111  117024 out.go:177]   - env NO_PROXY=192.168.39.150
	I0826 11:05:34.175759  117024 out.go:177]   - env NO_PROXY=192.168.39.150,192.168.39.55
	I0826 11:05:34.177146  117024 main.go:141] libmachine: (ha-055395-m03) Calling .GetIP
	I0826 11:05:34.180269  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:34.180633  117024 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:05:34.180659  117024 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:05:34.180902  117024 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:05:34.185305  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:05:34.199422  117024 mustload.go:65] Loading cluster: ha-055395
	I0826 11:05:34.199654  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:34.199969  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:34.200013  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:34.215420  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I0826 11:05:34.215882  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:34.216354  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:34.216373  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:34.216745  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:34.216992  117024 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:05:34.218814  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:05:34.219196  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:34.219237  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:34.235151  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0826 11:05:34.235583  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:34.236080  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:34.236106  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:34.236513  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:34.236720  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:05:34.236886  117024 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.209
	I0826 11:05:34.236897  117024 certs.go:194] generating shared ca certs ...
	I0826 11:05:34.236912  117024 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.237039  117024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:05:34.237074  117024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:05:34.237082  117024 certs.go:256] generating profile certs ...
	I0826 11:05:34.237147  117024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:05:34.237169  117024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6
	I0826 11:05:34.237187  117024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.209 192.168.39.254]
	I0826 11:05:34.313323  117024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 ...
	I0826 11:05:34.313359  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6: {Name:mk2be64c493d0f3fd7053f7cbe68fe5aba7b8425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.313533  117024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6 ...
	I0826 11:05:34.313546  117024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6: {Name:mkfe2613899429ae81d12c212dcf29a172aaaeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:05:34.313619  117024 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.7a1bfba6 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:05:34.313750  117024 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.7a1bfba6 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
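The apiserver profile cert generated above carries IP SANs for the service VIP (10.96.0.1), loopback, all three control-plane node IPs, and the kube-vip address 192.168.39.254. A self-contained sketch of issuing such a cert with Go's crypto/x509, using a throwaway CA in place of minikubeCA (illustrative only, not minikube's certs.go path):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // API-server serving cert with the IP SANs listed in the log above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.150"), net.ParseIP("192.168.39.55"),
                net.ParseIP("192.168.39.209"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }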
	I0826 11:05:34.313877  117024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:05:34.313893  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:05:34.313906  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:05:34.313919  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:05:34.313932  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:05:34.313944  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:05:34.313955  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:05:34.313967  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:05:34.313978  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:05:34.314030  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:05:34.314056  117024 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:05:34.314065  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:05:34.314085  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:05:34.314105  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:05:34.314127  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:05:34.314165  117024 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:05:34.314189  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.314202  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.314214  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.314247  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:05:34.317454  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:34.317952  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:05:34.317989  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:34.318174  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:05:34.318385  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:05:34.318631  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:05:34.318816  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:05:34.391327  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0826 11:05:34.397068  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0826 11:05:34.409713  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0826 11:05:34.414388  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0826 11:05:34.425537  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0826 11:05:34.429552  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0826 11:05:34.440715  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0826 11:05:34.445267  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0826 11:05:34.456636  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0826 11:05:34.461124  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0826 11:05:34.472765  117024 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0826 11:05:34.477157  117024 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0826 11:05:34.488224  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:05:34.513163  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:05:34.537621  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:05:34.563079  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:05:34.587778  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0826 11:05:34.612232  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:05:34.636366  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:05:34.661605  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:05:34.686530  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:05:34.711512  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:05:34.737635  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:05:34.761710  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0826 11:05:34.779591  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0826 11:05:34.797498  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0826 11:05:34.814490  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0826 11:05:34.831393  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0826 11:05:34.848281  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0826 11:05:34.865337  117024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0826 11:05:34.882381  117024 ssh_runner.go:195] Run: openssl version
	I0826 11:05:34.888002  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:05:34.899074  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.904128  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.904238  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:05:34.909727  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:05:34.920094  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:05:34.930409  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.934934  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.934990  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:05:34.940830  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:05:34.952681  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:05:34.965758  117024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.970440  117024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.970496  117024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:05:34.976185  117024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
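Each CA bundle copied above is hashed with openssl x509 -hash and exposed under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it by subject hash. A small sketch of that hash-and-symlink step, shelling out to openssl the same way the log does (hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert reproduces the step from the log: /etc/ssl/certs/<subject-hash>.0 -> <pem>.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }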
	I0826 11:05:34.989290  117024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:05:34.993982  117024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:05:34.994063  117024 kubeadm.go:934] updating node {m03 192.168.39.209 8443 v1.31.0 crio true true} ...
	I0826 11:05:34.994152  117024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:05:34.994177  117024 kube-vip.go:115] generating kube-vip config ...
	I0826 11:05:34.994222  117024 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:05:35.010372  117024 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:05:35.010476  117024 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
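The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs kube-vip as a static pod advertising the control-plane VIP 192.168.39.254 on port 8443 with leader election and load-balancing enabled. Only the address and port vary per cluster here; a sketch of rendering just those fields with text/template (minikube's actual template lives in kube-vip.go and differs in detail):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Only the per-cluster fields from the manifest above.
    const vipEnv = `    - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(vipEnv))
        err := t.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{VIP: "192.168.39.254", Port: 8443})
        if err != nil {
            log.Fatal(err)
        }
    }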
	I0826 11:05:35.010556  117024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:05:35.020648  117024 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0826 11:05:35.020797  117024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0826 11:05:35.031858  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0826 11:05:35.031859  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0826 11:05:35.031897  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:05:35.031896  117024 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0826 11:05:35.031913  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:05:35.031943  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:05:35.031966  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0826 11:05:35.031971  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0826 11:05:35.041418  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0826 11:05:35.041453  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0826 11:05:35.056955  117024 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:05:35.056980  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0826 11:05:35.057019  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0826 11:05:35.057062  117024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0826 11:05:35.107605  117024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0826 11:05:35.107665  117024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
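Because the binaries are absent from /var/lib/minikube/binaries/v1.31.0, kubeadm, kubectl and kubelet are fetched from dl.k8s.io and checked against the published .sha256 files before being copied onto the node. A simplified, in-memory sketch of that download-and-verify step (stdlib only; a real implementation would stream to disk rather than buffer the whole binary):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
    )

    // fetch downloads url and returns the body plus its SHA-256 hex digest.
    func fetch(url string) ([]byte, string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, "", err
        }
        sum := sha256.Sum256(body)
        return body, hex.EncodeToString(sum[:]), nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm"
        bin, got, err := fetch(base)
        if err != nil {
            log.Fatal(err)
        }
        sumFile, _, err := fetch(base + ".sha256") // the .sha256 file contains the expected hex digest
        if err != nil {
            log.Fatal(err)
        }
        if got != strings.TrimSpace(string(sumFile)) {
            log.Fatal("checksum mismatch for kubeadm")
        }
        fmt.Printf("kubeadm verified, %d bytes\n", len(bin))
    }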
	I0826 11:05:35.934172  117024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0826 11:05:35.944020  117024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0826 11:05:35.960999  117024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:05:35.978215  117024 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:05:35.996039  117024 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:05:36.000425  117024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
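As with host.minikube.internal earlier, the one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the VIP 192.168.39.254. A rough Go equivalent of that grep-and-append, assuming direct access to the node's /etc/hosts:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any stale mapping, as the grep -v in the log does.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            log.Fatal(err)
        }
    }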
	I0826 11:05:36.013711  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:05:36.146677  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:05:36.166818  117024 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:05:36.167336  117024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:05:36.167392  117024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:05:36.184634  117024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0826 11:05:36.185060  117024 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:05:36.185590  117024 main.go:141] libmachine: Using API Version  1
	I0826 11:05:36.185610  117024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:05:36.185954  117024 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:05:36.186174  117024 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:05:36.186335  117024 start.go:317] joinCluster: &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:05:36.186467  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0826 11:05:36.186482  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:05:36.189192  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:36.189657  117024 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:05:36.189691  117024 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:05:36.189895  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:05:36.190073  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:05:36.190274  117024 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:05:36.190439  117024 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:05:36.347817  117024 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:36.347886  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lm9l3u.n05vhvc2b02519dh --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m03 --control-plane --apiserver-advertise-address=192.168.39.209 --apiserver-bind-port=8443"
	I0826 11:05:59.051708  117024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lm9l3u.n05vhvc2b02519dh --discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-055395-m03 --control-plane --apiserver-advertise-address=192.168.39.209 --apiserver-bind-port=8443": (22.703790459s)
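The join command above authenticates the cluster CA through --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes that hash from the ca.crt placed on the node earlier in this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path as used on the minikube node; adjust as needed.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm's discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }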
	I0826 11:05:59.051757  117024 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0826 11:05:59.640986  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-055395-m03 minikube.k8s.io/updated_at=2024_08_26T11_05_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=ha-055395 minikube.k8s.io/primary=false
	I0826 11:05:59.765186  117024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-055395-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0826 11:05:59.896625  117024 start.go:319] duration metric: took 23.710285157s to joinCluster
	I0826 11:05:59.896731  117024 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:05:59.897065  117024 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:05:59.898663  117024 out.go:177] * Verifying Kubernetes components...
	I0826 11:05:59.900463  117024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:06:00.184359  117024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:06:00.235461  117024 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:06:00.235832  117024 kapi.go:59] client config for ha-055395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0826 11:06:00.235932  117024 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0826 11:06:00.236244  117024 node_ready.go:35] waiting up to 6m0s for node "ha-055395-m03" to be "Ready" ...
	I0826 11:06:00.236339  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:00.236351  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:00.236362  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:00.236368  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:00.240278  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:00.736674  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:00.736703  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:00.736714  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:00.736719  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:00.740533  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:01.236867  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:01.236902  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:01.236913  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:01.236926  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:01.240745  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:01.736794  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:01.736818  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:01.736829  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:01.736833  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:01.740681  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:02.237262  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:02.237290  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:02.237298  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:02.237302  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:02.240458  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:02.240927  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:02.737107  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:02.737131  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:02.737140  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:02.737144  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:02.740759  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:03.237128  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:03.237155  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:03.237165  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:03.237169  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:03.240476  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:03.736450  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:03.736499  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:03.736511  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:03.736516  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:03.740617  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:04.237300  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:04.237326  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:04.237333  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:04.237337  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:04.240827  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:04.241583  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:04.737453  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:04.737482  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:04.737495  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:04.737503  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:04.740868  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:05.236500  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:05.236521  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:05.236530  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:05.236536  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:05.239881  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:05.737338  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:05.737363  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:05.737377  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:05.737382  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:05.740764  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:06.237354  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:06.237387  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:06.237401  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:06.237408  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:06.242710  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:06.243468  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:06.736774  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:06.736797  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:06.736806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:06.736817  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:06.741224  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:07.236635  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:07.236671  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:07.236680  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:07.236685  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:07.240380  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:07.737503  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:07.737530  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:07.737539  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:07.737543  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:07.741193  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:08.237059  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:08.237082  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:08.237091  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:08.237095  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:08.240517  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:08.737441  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:08.737471  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:08.737481  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:08.737490  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:08.741670  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:08.742338  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:09.237100  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:09.237123  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:09.237131  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:09.237135  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:09.240486  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:09.737006  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:09.737038  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:09.737048  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:09.737055  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:09.740611  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:10.237065  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:10.237093  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:10.237104  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:10.237112  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:10.239977  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:10.736440  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:10.736464  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:10.736472  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:10.736476  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:10.740034  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:11.236461  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:11.236483  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:11.236492  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:11.236497  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:11.240157  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:11.240690  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:11.737095  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:11.737118  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:11.737126  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:11.737130  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:11.740781  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:12.237547  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:12.237574  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:12.237582  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:12.237586  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:12.241584  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:12.736582  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:12.736612  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:12.736622  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:12.736626  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:12.740044  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:13.236955  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:13.236984  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:13.236993  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:13.236997  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:13.240548  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:13.241222  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:13.736491  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:13.736516  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:13.736525  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:13.736530  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:13.739943  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:14.237178  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:14.237201  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:14.237210  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:14.237214  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:14.241129  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:14.736612  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:14.736642  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:14.736660  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:14.736667  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:14.740125  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:15.237206  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:15.237233  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:15.237245  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:15.237250  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:15.240870  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:15.241455  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:15.737334  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:15.737362  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:15.737370  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:15.737375  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:15.741177  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:16.236987  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:16.237012  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:16.237020  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:16.237024  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:16.240991  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:16.736852  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:16.736880  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:16.736888  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:16.736891  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:16.741118  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:17.236578  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:17.236605  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:17.236613  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:17.236616  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:17.240086  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:17.736956  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:17.736978  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:17.736987  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:17.736991  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:17.740431  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:17.741256  117024 node_ready.go:53] node "ha-055395-m03" has status "Ready":"False"
	I0826 11:06:18.236564  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.236592  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.236601  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.236605  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.240062  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.240555  117024 node_ready.go:49] node "ha-055395-m03" has status "Ready":"True"
	I0826 11:06:18.240576  117024 node_ready.go:38] duration metric: took 18.004312905s for node "ha-055395-m03" to be "Ready" ...
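The repeated GETs above are the node_ready poll: the node object is fetched roughly every 500ms until its Ready condition reports True, which here took about 18 seconds. A client-go sketch of the same readiness check against the kubeconfig this run uses (a standalone illustration, not minikube's round-tripper path):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19501-99403/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-055395-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for node to become Ready")
    }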
	I0826 11:06:18.240586  117024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:06:18.240662  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:18.240672  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.240680  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.240685  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.247667  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:06:18.255049  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.255144  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-l9bd4
	I0826 11:06:18.255152  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.255160  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.255163  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.258174  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.258933  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.258956  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.258967  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.258975  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.261839  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.262337  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.262360  117024 pod_ready.go:82] duration metric: took 7.280488ms for pod "coredns-6f6b679f8f-l9bd4" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.262374  117024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.262448  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nxb7s
	I0826 11:06:18.262459  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.262469  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.262475  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.268156  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:18.268916  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.268934  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.268941  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.268946  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.272031  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.272672  117024 pod_ready.go:93] pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.272696  117024 pod_ready.go:82] duration metric: took 10.313624ms for pod "coredns-6f6b679f8f-nxb7s" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.272709  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.272790  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395
	I0826 11:06:18.272802  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.272820  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.272829  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.275976  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.276783  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:18.276798  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.276806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.276811  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.279604  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.280422  117024 pod_ready.go:93] pod "etcd-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.280458  117024 pod_ready.go:82] duration metric: took 7.740578ms for pod "etcd-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.280474  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.280562  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m02
	I0826 11:06:18.280575  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.280588  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.280596  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.283900  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.284722  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:18.284736  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.284743  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.284747  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.287513  117024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0826 11:06:18.288091  117024 pod_ready.go:93] pod "etcd-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.288113  117024 pod_ready.go:82] duration metric: took 7.631105ms for pod "etcd-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.288123  117024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.437524  117024 request.go:632] Waited for 149.331839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m03
	I0826 11:06:18.437606  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-055395-m03
	I0826 11:06:18.437626  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.437635  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.437640  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.441585  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:18.636690  117024 request.go:632] Waited for 194.348676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.636773  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:18.636780  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.636791  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.636801  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.641895  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:18.642471  117024 pod_ready.go:93] pod "etcd-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:18.642495  117024 pod_ready.go:82] duration metric: took 354.363726ms for pod "etcd-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.642518  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:18.836640  117024 request.go:632] Waited for 194.005829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:06:18.836727  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395
	I0826 11:06:18.836734  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:18.836746  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:18.836753  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:18.840987  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:19.037052  117024 request.go:632] Waited for 195.381707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:19.037122  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:19.037128  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.037135  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.037139  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.041035  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.041810  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.041848  117024 pod_ready.go:82] duration metric: took 399.304359ms for pod "kube-apiserver-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.041862  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.237476  117024 request.go:632] Waited for 195.524757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:06:19.237541  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m02
	I0826 11:06:19.237546  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.237567  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.237571  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.241226  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.436657  117024 request.go:632] Waited for 194.288015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:19.436724  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:19.436729  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.436737  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.436742  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.440727  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.441435  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.441460  117024 pod_ready.go:82] duration metric: took 399.591361ms for pod "kube-apiserver-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.441478  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.637523  117024 request.go:632] Waited for 195.952664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m03
	I0826 11:06:19.637615  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-055395-m03
	I0826 11:06:19.637622  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.637630  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.637635  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.641332  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.836812  117024 request.go:632] Waited for 194.542228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:19.836894  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:19.836899  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:19.836909  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:19.836914  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:19.840756  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:19.841371  117024 pod_ready.go:93] pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:19.841396  117024 pod_ready.go:82] duration metric: took 399.909275ms for pod "kube-apiserver-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:19.841410  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.037378  117024 request.go:632] Waited for 195.879685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:06:20.037449  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395
	I0826 11:06:20.037455  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.037464  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.037468  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.041198  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.237157  117024 request.go:632] Waited for 195.361607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:20.237226  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:20.237232  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.237239  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.237243  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.240423  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.241263  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:20.241281  117024 pod_ready.go:82] duration metric: took 399.863521ms for pod "kube-controller-manager-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.241291  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.437154  117024 request.go:632] Waited for 195.764082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:06:20.437232  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m02
	I0826 11:06:20.437240  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.437251  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.437257  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.441193  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.637545  117024 request.go:632] Waited for 195.425179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:20.637623  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:20.637629  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.637638  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.637643  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.641398  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:20.642370  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:20.642390  117024 pod_ready.go:82] duration metric: took 401.093186ms for pod "kube-controller-manager-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.642400  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:20.837552  117024 request.go:632] Waited for 195.047341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m03
	I0826 11:06:20.837636  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-055395-m03
	I0826 11:06:20.837644  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:20.837656  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:20.837669  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:20.841552  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.036616  117024 request.go:632] Waited for 194.305096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.036711  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.036716  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.036725  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.036730  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.040195  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.040698  117024 pod_ready.go:93] pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.040719  117024 pod_ready.go:82] duration metric: took 398.313858ms for pod "kube-controller-manager-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.040730  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52vmd" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.237315  117024 request.go:632] Waited for 196.499841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52vmd
	I0826 11:06:21.237377  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-52vmd
	I0826 11:06:21.237384  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.237395  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.237400  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.240742  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.436784  117024 request.go:632] Waited for 195.332846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.436868  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:21.436875  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.436886  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.436892  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.440708  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.441274  117024 pod_ready.go:93] pod "kube-proxy-52vmd" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.441294  117024 pod_ready.go:82] duration metric: took 400.557073ms for pod "kube-proxy-52vmd" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.441308  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.637557  117024 request.go:632] Waited for 196.170343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:06:21.637645  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g45pb
	I0826 11:06:21.637651  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.637658  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.637661  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.642756  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:21.836985  117024 request.go:632] Waited for 193.407328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:21.837058  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:21.837066  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:21.837076  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:21.837085  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:21.840577  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:21.841247  117024 pod_ready.go:93] pod "kube-proxy-g45pb" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:21.841269  117024 pod_ready.go:82] duration metric: took 399.95227ms for pod "kube-proxy-g45pb" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:21.841279  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.036701  117024 request.go:632] Waited for 195.350804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:06:22.036785  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zl5bm
	I0826 11:06:22.036790  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.036806  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.036824  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.040409  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.237617  117024 request.go:632] Waited for 196.424222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:22.237699  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:22.237706  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.237717  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.237722  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.241336  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.242092  117024 pod_ready.go:93] pod "kube-proxy-zl5bm" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:22.242112  117024 pod_ready.go:82] duration metric: took 400.82761ms for pod "kube-proxy-zl5bm" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.242122  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.437229  117024 request.go:632] Waited for 195.016866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:06:22.437295  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395
	I0826 11:06:22.437300  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.437308  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.437312  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.441030  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.637598  117024 request.go:632] Waited for 195.482711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:22.637676  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395
	I0826 11:06:22.637682  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.637689  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.637694  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.641467  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:22.642037  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:22.642054  117024 pod_ready.go:82] duration metric: took 399.926666ms for pod "kube-scheduler-ha-055395" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.642064  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:22.837339  117024 request.go:632] Waited for 195.191726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:06:22.837410  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m02
	I0826 11:06:22.837415  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:22.837422  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:22.837427  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:22.841838  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:23.036722  117024 request.go:632] Waited for 194.282073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:23.036805  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m02
	I0826 11:06:23.036811  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.036818  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.036826  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.040709  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.041522  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:23.041543  117024 pod_ready.go:82] duration metric: took 399.471152ms for pod "kube-scheduler-ha-055395-m02" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.041559  117024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.237674  117024 request.go:632] Waited for 196.018809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m03
	I0826 11:06:23.237752  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-055395-m03
	I0826 11:06:23.237758  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.237766  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.237770  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.241372  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.437409  117024 request.go:632] Waited for 195.395835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:23.437486  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-055395-m03
	I0826 11:06:23.437492  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.437506  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.437517  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.440863  117024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0826 11:06:23.441579  117024 pod_ready.go:93] pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace has status "Ready":"True"
	I0826 11:06:23.441604  117024 pod_ready.go:82] duration metric: took 400.03879ms for pod "kube-scheduler-ha-055395-m03" in "kube-system" namespace to be "Ready" ...
	I0826 11:06:23.441617  117024 pod_ready.go:39] duration metric: took 5.201013746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:06:23.441633  117024 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:06:23.441700  117024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:06:23.457907  117024 api_server.go:72] duration metric: took 23.561130355s to wait for apiserver process to appear ...
	I0826 11:06:23.457939  117024 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:06:23.457966  117024 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0826 11:06:23.462864  117024 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0826 11:06:23.462936  117024 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0826 11:06:23.462944  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.462952  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.462959  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.463914  117024 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0826 11:06:23.463974  117024 api_server.go:141] control plane version: v1.31.0
	I0826 11:06:23.463988  117024 api_server.go:131] duration metric: took 6.042713ms to wait for apiserver health ...
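The health gate recorded above is two requests: a raw GET against the apiserver's /healthz endpoint (a healthy server answers 200 with the body "ok") followed by GET /version to read the control-plane version (v1.31.0 here). A rough client-go equivalent, offered as a sketch under the same default-kubeconfig assumption as before:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig points at the ha-055395 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Raw GET /healthz, as in the log; expect the literal body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz: %s\n", body)

	// GET /version, the call that reported the v1.31.0 control plane above.
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion)
}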
	I0826 11:06:23.463996  117024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:06:23.637440  117024 request.go:632] Waited for 173.339398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:23.637509  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:23.637515  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.637522  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.637526  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.644026  117024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0826 11:06:23.650289  117024 system_pods.go:59] 24 kube-system pods found
	I0826 11:06:23.650323  117024 system_pods.go:61] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:06:23.650328  117024 system_pods.go:61] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:06:23.650332  117024 system_pods.go:61] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:06:23.650335  117024 system_pods.go:61] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:06:23.650338  117024 system_pods.go:61] "etcd-ha-055395-m03" [58ac0f4b-05b2-4304-9a5a-442c4ece6271] Running
	I0826 11:06:23.650341  117024 system_pods.go:61] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:06:23.650344  117024 system_pods.go:61] "kindnet-wnz4m" [a1409b32-1fad-47e2-8c6e-97e2d0350e72] Running
	I0826 11:06:23.650347  117024 system_pods.go:61] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:06:23.650350  117024 system_pods.go:61] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:06:23.650353  117024 system_pods.go:61] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:06:23.650355  117024 system_pods.go:61] "kube-apiserver-ha-055395-m03" [4499f800-70e2-4864-8871-0f9cd30331b6] Running
	I0826 11:06:23.650358  117024 system_pods.go:61] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:06:23.650362  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:06:23.650364  117024 system_pods.go:61] "kube-controller-manager-ha-055395-m03" [0e15ae3e-1330-4624-9c7d-019886111312] Running
	I0826 11:06:23.650367  117024 system_pods.go:61] "kube-proxy-52vmd" [3c3c5e99-eaf5-41ef-a319-de13b16b4936] Running
	I0826 11:06:23.650370  117024 system_pods.go:61] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:06:23.650373  117024 system_pods.go:61] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:06:23.650375  117024 system_pods.go:61] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:06:23.650378  117024 system_pods.go:61] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:06:23.650381  117024 system_pods.go:61] "kube-scheduler-ha-055395-m03" [c63e9b31-fade-466b-87a4-661fba5e0e61] Running
	I0826 11:06:23.650383  117024 system_pods.go:61] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:06:23.650386  117024 system_pods.go:61] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:06:23.650388  117024 system_pods.go:61] "kube-vip-ha-055395-m03" [7dc9fbef-3a6f-4570-8e14-e3bbe1e7cab7] Running
	I0826 11:06:23.650392  117024 system_pods.go:61] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:06:23.650398  117024 system_pods.go:74] duration metric: took 186.396638ms to wait for pod list to return data ...
	I0826 11:06:23.650406  117024 default_sa.go:34] waiting for default service account to be created ...
	I0826 11:06:23.836798  117024 request.go:632] Waited for 186.304297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:06:23.836874  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0826 11:06:23.836880  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:23.836887  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:23.836892  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:23.841344  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:23.841466  117024 default_sa.go:45] found service account: "default"
	I0826 11:06:23.841479  117024 default_sa.go:55] duration metric: took 191.067398ms for default service account to be created ...
	I0826 11:06:23.841488  117024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 11:06:24.036961  117024 request.go:632] Waited for 195.394858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:24.037050  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0826 11:06:24.037058  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:24.037067  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:24.037073  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:24.042393  117024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0826 11:06:24.049055  117024 system_pods.go:86] 24 kube-system pods found
	I0826 11:06:24.049087  117024 system_pods.go:89] "coredns-6f6b679f8f-l9bd4" [087dd322-a382-40bc-b631-5744d64ee6b6] Running
	I0826 11:06:24.049096  117024 system_pods.go:89] "coredns-6f6b679f8f-nxb7s" [80b1f99e-a6b9-452f-9e21-b0df08325d56] Running
	I0826 11:06:24.049102  117024 system_pods.go:89] "etcd-ha-055395" [28419734-e4da-4ec0-a7db-0094855feac2] Running
	I0826 11:06:24.049108  117024 system_pods.go:89] "etcd-ha-055395-m02" [9ce0c9b5-4072-4ea1-b326-d7b8b78b578d] Running
	I0826 11:06:24.049114  117024 system_pods.go:89] "etcd-ha-055395-m03" [58ac0f4b-05b2-4304-9a5a-442c4ece6271] Running
	I0826 11:06:24.049119  117024 system_pods.go:89] "kindnet-js2cb" [3364fb33-1685-4137-a94a-b237b8ceb9c6] Running
	I0826 11:06:24.049124  117024 system_pods.go:89] "kindnet-wnz4m" [a1409b32-1fad-47e2-8c6e-97e2d0350e72] Running
	I0826 11:06:24.049129  117024 system_pods.go:89] "kindnet-z2rh2" [f1df8e80-62b7-4a0a-b61a-135b907c101d] Running
	I0826 11:06:24.049134  117024 system_pods.go:89] "kube-apiserver-ha-055395" [2bd78c6d-3ad6-4064-a59b-ade12f446056] Running
	I0826 11:06:24.049139  117024 system_pods.go:89] "kube-apiserver-ha-055395-m02" [9fbaba21-92b7-46e3-8840-9422e4206f59] Running
	I0826 11:06:24.049146  117024 system_pods.go:89] "kube-apiserver-ha-055395-m03" [4499f800-70e2-4864-8871-0f9cd30331b6] Running
	I0826 11:06:24.049151  117024 system_pods.go:89] "kube-controller-manager-ha-055395" [3fce2abe-e401-4c5b-8e0e-53c85390ac76] Running
	I0826 11:06:24.049158  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m02" [4c9f6ebc-407a-4383-bf5f-0c91903ba213] Running
	I0826 11:06:24.049166  117024 system_pods.go:89] "kube-controller-manager-ha-055395-m03" [0e15ae3e-1330-4624-9c7d-019886111312] Running
	I0826 11:06:24.049175  117024 system_pods.go:89] "kube-proxy-52vmd" [3c3c5e99-eaf5-41ef-a319-de13b16b4936] Running
	I0826 11:06:24.049182  117024 system_pods.go:89] "kube-proxy-g45pb" [0e2dc897-60b1-4d06-a4e4-30136a39a224] Running
	I0826 11:06:24.049189  117024 system_pods.go:89] "kube-proxy-zl5bm" [bed428b3-57e8-4704-a1fd-b3db1b3e4d6c] Running
	I0826 11:06:24.049194  117024 system_pods.go:89] "kube-scheduler-ha-055395" [6ce30f64-767d-422b-8bf7-40ebc2179dcb] Running
	I0826 11:06:24.049200  117024 system_pods.go:89] "kube-scheduler-ha-055395-m02" [4d95a077-6a4d-4639-bb52-58b369107c66] Running
	I0826 11:06:24.049208  117024 system_pods.go:89] "kube-scheduler-ha-055395-m03" [c63e9b31-fade-466b-87a4-661fba5e0e61] Running
	I0826 11:06:24.049216  117024 system_pods.go:89] "kube-vip-ha-055395" [72a93d75-67e0-4605-81c3-f1ed830fd5eb] Running
	I0826 11:06:24.049224  117024 system_pods.go:89] "kube-vip-ha-055395-m02" [14132392-e3db-4ad5-b608-ed22e36d856b] Running
	I0826 11:06:24.049230  117024 system_pods.go:89] "kube-vip-ha-055395-m03" [7dc9fbef-3a6f-4570-8e14-e3bbe1e7cab7] Running
	I0826 11:06:24.049235  117024 system_pods.go:89] "storage-provisioner" [5bf3fea9-2562-4769-944b-72472da24419] Running
	I0826 11:06:24.049245  117024 system_pods.go:126] duration metric: took 207.750065ms to wait for k8s-apps to be running ...
	I0826 11:06:24.049259  117024 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 11:06:24.049317  117024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:06:24.065236  117024 system_svc.go:56] duration metric: took 15.963207ms WaitForService to wait for kubelet
	I0826 11:06:24.065277  117024 kubeadm.go:582] duration metric: took 24.168505094s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:06:24.065323  117024 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:06:24.237107  117024 request.go:632] Waited for 171.674022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0826 11:06:24.237166  117024 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0826 11:06:24.237171  117024 round_trippers.go:469] Request Headers:
	I0826 11:06:24.237178  117024 round_trippers.go:473]     Accept: application/json, */*
	I0826 11:06:24.237183  117024 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0826 11:06:24.241375  117024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0826 11:06:24.242231  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242251  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242262  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242265  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242269  117024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:06:24.242272  117024 node_conditions.go:123] node cpu capacity is 2
	I0826 11:06:24.242276  117024 node_conditions.go:105] duration metric: took 176.947306ms to run NodePressure ...
	I0826 11:06:24.242287  117024 start.go:241] waiting for startup goroutines ...
	I0826 11:06:24.242309  117024 start.go:255] writing updated cluster config ...
	I0826 11:06:24.242597  117024 ssh_runner.go:195] Run: rm -f paused
	I0826 11:06:24.297402  117024 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 11:06:24.299546  117024 out.go:177] * Done! kubectl is now configured to use "ha-055395" cluster and "default" namespace by default
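Two of the final gates in the run above are host-level rather than API-level: minikube shells into the VM to confirm a kube-apiserver process exists (pgrep) and that the kubelet unit is active (systemctl). A hedged sketch of the same probes driven through the minikube CLI rather than minikube's internal ssh_runner; it assumes the out/minikube-linux-amd64 binary and the ha-055395 profile used in this run:

package main

import (
	"fmt"
	"os/exec"
)

// sshRun executes a command inside the minikube VM for the given profile
// and reports whether it exited successfully.
func sshRun(profile string, args ...string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", profile, "ssh", "--"}, args...)...)
	return cmd.Run() == nil
}

func main() {
	profile := "ha-055395"

	// Mirrors the log's "sudo pgrep -xnf kube-apiserver.*minikube.*".
	apiserverUp := sshRun(profile, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")

	// Mirrors the log's "sudo systemctl is-active --quiet service kubelet".
	kubeletActive := sshRun(profile, "sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")

	fmt.Printf("kube-apiserver process: %v, kubelet service active: %v\n", apiserverUp, kubeletActive)
}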
	
	
	==> CRI-O <==
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.034963779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670662034941092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5549b1ea-ef67-4d60-aa0b-caa075e9231c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.035671253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c5b1b77-26de-4c2d-b618-1df401daf02d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.035726576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c5b1b77-26de-4c2d-b618-1df401daf02d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.036010425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c5b1b77-26de-4c2d-b618-1df401daf02d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.073204264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba1a581d-a486-46a6-8c43-a0e922806cc7 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.073389365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba1a581d-a486-46a6-8c43-a0e922806cc7 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.075093656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdf3d6a3-a72e-4cf8-abf3-d8395fc9319b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.075653631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670662075629339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdf3d6a3-a72e-4cf8-abf3-d8395fc9319b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.076196046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29b8a824-4c85-4c48-8dc5-e7f3c3b81024 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.076267181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29b8a824-4c85-4c48-8dc5-e7f3c3b81024 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.076524204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29b8a824-4c85-4c48-8dc5-e7f3c3b81024 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.113413013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dad98255-3149-49a8-8798-c6f4b8c7d5f0 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.113488955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dad98255-3149-49a8-8798-c6f4b8c7d5f0 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.115421160Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a92900e-8dcf-40c7-8582-424c19bcb937 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.116117735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670662116089346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a92900e-8dcf-40c7-8582-424c19bcb937 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.116630108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ec067d5-0592-45fe-ae4b-b7dbb5a98def name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.116697677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ec067d5-0592-45fe-ae4b-b7dbb5a98def name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.117019023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ec067d5-0592-45fe-ae4b-b7dbb5a98def name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.155688881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9266d333-dbb3-409c-9dbe-85751f2d667e name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.155817136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9266d333-dbb3-409c-9dbe-85751f2d667e name=/runtime.v1.RuntimeService/Version
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.157302360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0af059b-aa93-4569-8b5c-2bb9a3cde05c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.158172304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670662158144133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0af059b-aa93-4569-8b5c-2bb9a3cde05c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.159175057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4547f7ef-bf55-4c22-bc0c-976600692f93 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.159272330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4547f7ef-bf55-4c22-bc0c-976600692f93 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:11:02 ha-055395 crio[676]: time="2024-08-26 11:11:02.159621437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670388552239950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252440950659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670252404169206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307243c699fa9b66da2de1b5fdbd580fc20a97a961555faa4c916427517feeaf,PodSandboxId:21c0385083f3815307e2709e0449fdd9c00d8ed519a25e8f762488c338593aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724670252314333976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724670240453524239,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172467023
6587408693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e,PodSandboxId:862ecb4417c554988c653e82b6413ff1bd0b05dfb072e6ac7d1c74fccee090d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172467022741
5092475,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfb8a00dbd999308581413a12e69784,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670224881902594,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670224828948129,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b,PodSandboxId:bac675258d360620f9e642b72f7188ff9798375b5e377c44ca66f910838cf433,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670224758842886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5,PodSandboxId:3eb49d746b20e3f7254aa34c0a9686eb08fa5179c853e497fdabfde7fd3959fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670224743530164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4547f7ef-bf55-4c22-bc0c-976600692f93 name=/runtime.v1.RuntimeService/ListContainers
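	For readers unfamiliar with the RPC names that dominate the crio debug log above (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers), the following is a minimal sketch, not part of the test harness, of how those same CRI calls can be issued against the node's CRI socket using the published k8s.io/cri-api types. The socket path is taken from the cri-socket annotation shown further down; access rights and module versions are assumptions.

	```go
	// Sketch only: reproduces the three CRI RPCs seen in the crio debug log.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// cri-o's socket inside the VM (assumed reachable by the caller).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimev1.NewRuntimeServiceClient(conn)
		img := runtimev1.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1"

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter, which
		// is why crio logs "No filters were applied, returning full container list".
		list, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{
			Filter: &runtimev1.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}
	```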
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2f106e1bd830       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   a356e619a2186       busybox-7dff88458-xh6vw
	588201165ca01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   73e7528d83ce5       coredns-6f6b679f8f-nxb7s
	9fdad1c79bb41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3593c2f74b608       coredns-6f6b679f8f-l9bd4
	307243c699fa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   21c0385083f38       storage-provisioner
	d5ffe25b55c8a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   3f092331272f7       kindnet-z2rh2
	4518376ec7b4a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   dd6c20478efce       kube-proxy-g45pb
	d4490a4c3fa0b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   862ecb4417c55       kube-vip-ha-055395
	9f71e1964ec11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   d03f237462672       kube-scheduler-ha-055395
	9500eb08ad452       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   40a84124456a3       etcd-ha-055395
	bcd57c7d0ba05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   bac675258d360       kube-controller-manager-ha-055395
	37bbfc44887fa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   3eb49d746b20e       kube-apiserver-ha-055395
	
	
	==> coredns [588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9] <==
	[INFO] 10.244.1.2:59222 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002106896s
	[INFO] 10.244.1.2:42031 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136805s
	[INFO] 10.244.1.2:48240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195092s
	[INFO] 10.244.1.2:39354 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428637s
	[INFO] 10.244.1.2:38981 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143058s
	[INFO] 10.244.1.2:42169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00025738s
	[INFO] 10.244.0.4:39980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128242s
	[INFO] 10.244.0.4:57380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001955064s
	[INFO] 10.244.0.4:60257 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001538811s
	[INFO] 10.244.0.4:60079 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000036286s
	[INFO] 10.244.0.4:50624 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103476s
	[INFO] 10.244.0.4:46611 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034392s
	[INFO] 10.244.3.2:52234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158227s
	[INFO] 10.244.3.2:51370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133305s
	[INFO] 10.244.3.2:40430 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145691s
	[INFO] 10.244.3.2:50269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000221739s
	[INFO] 10.244.1.2:49573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010888s
	[INFO] 10.244.0.4:49284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199959s
	[INFO] 10.244.3.2:38694 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.3.2:55559 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116423s
	[INFO] 10.244.1.2:38712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274813s
	[INFO] 10.244.1.2:38536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091302s
	[INFO] 10.244.0.4:35805 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089054s
	[INFO] 10.244.0.4:53560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109072s
	[INFO] 10.244.0.4:50886 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061358s
	
	
	==> coredns [9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e] <==
	[INFO] 127.0.0.1:33483 - 35199 "HINFO IN 6318060826605411215.8532303163548737398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011519809s
	[INFO] 10.244.3.2:42757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011759911s
	[INFO] 10.244.0.4:48529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225855s
	[INFO] 10.244.0.4:39187 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001675593s
	[INFO] 10.244.0.4:36731 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000357451s
	[INFO] 10.244.0.4:57644 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001630694s
	[INFO] 10.244.3.2:35262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118796s
	[INFO] 10.244.3.2:56831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004452512s
	[INFO] 10.244.3.2:50141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195651s
	[INFO] 10.244.3.2:52724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157926s
	[INFO] 10.244.3.2:48168 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135307s
	[INFO] 10.244.1.2:49021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099106s
	[INFO] 10.244.0.4:33653 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173931s
	[INFO] 10.244.0.4:49095 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089973s
	[INFO] 10.244.1.2:60072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132366s
	[INFO] 10.244.1.2:45712 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081817s
	[INFO] 10.244.1.2:47110 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082159s
	[INFO] 10.244.0.4:48619 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100933s
	[INFO] 10.244.0.4:37358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069397s
	[INFO] 10.244.0.4:46981 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092796s
	[INFO] 10.244.3.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240921s
	[INFO] 10.244.3.2:44319 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002198s
	[INFO] 10.244.1.2:48438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216864s
	[INFO] 10.244.1.2:45176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133331s
	[INFO] 10.244.0.4:41108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112163s
	
	
	==> describe nodes <==
	Name:               ha-055395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:11:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:06:54 +0000   Mon, 26 Aug 2024 11:04:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-055395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 68841a7ef08f47a386553bd433710191
	  System UUID:                68841a7e-f08f-47a3-8655-3bd433710191
	  Boot ID:                    be93c222-ff08-41d5-baae-cb87ba3b44cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xh6vw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-6f6b679f8f-l9bd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 coredns-6f6b679f8f-nxb7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 etcd-ha-055395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m11s
	  kube-system                 kindnet-z2rh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m7s
	  kube-system                 kube-apiserver-ha-055395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-055395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-g45pb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-055395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-055395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m5s   kube-proxy       
	  Normal  Starting                 7m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m11s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m11s  kubelet          Node ha-055395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s  kubelet          Node ha-055395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s  kubelet          Node ha-055395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal  NodeReady                6m51s  kubelet          Node ha-055395 status is now: NodeReady
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal  RegisteredNode           4m58s  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
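	As a quick cross-check of the "Allocated resources" table above, the percentage columns follow from the node's Allocatable figures (2 CPUs = 2000m, memory 2164184Ki), with the result truncated to whole percents. A small hedged sketch of that arithmetic, not part of the report:

	```go
	// Recomputes the percentages shown for node ha-055395 from its Allocatable block.
	package main

	import "fmt"

	func pct(part, whole float64) int { return int(part / whole * 100) }

	func main() {
		allocCPU := 2000.0      // allocatable cpu: 2 cores, in millicores
		allocMemKi := 2164184.0 // allocatable memory, in Ki

		fmt.Println(pct(950, allocCPU))        // cpu requests    -> 47
		fmt.Println(pct(100, allocCPU))        // cpu limits      -> 5
		fmt.Println(pct(290*1024, allocMemKi)) // memory requests -> 13
		fmt.Println(pct(390*1024, allocMemKi)) // memory limits   -> 18
	}
	```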
	
	
	Name:               ha-055395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:04:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:07:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 26 Aug 2024 11:06:48 +0000   Mon, 26 Aug 2024 11:08:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-055395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9151de0e0e3545e983307f4ed75379a4
	  System UUID:                9151de0e-0e35-45e9-8330-7f4ed75379a4
	  Boot ID:                    4303fdb0-210c-4d93-9956-aae5fab451d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbwm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-055395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-js2cb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-055395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-055395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-zl5bm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-055395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-055395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     6m17s                  cidrAllocator    Node ha-055395-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  NodeNotReady             2m43s                  node-controller  Node ha-055395-m02 status is now: NodeNotReady
	
	
	Name:               ha-055395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_05_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:05:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:11:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:06:57 +0000   Mon, 26 Aug 2024 11:06:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-055395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85be43a1fb394f4ea22aa7e3674c88fc
	  System UUID:                85be43a1-fb39-4f4e-a22a-a7e3674c88fc
	  Boot ID:                    f1c6fea4-515c-4231-b6c3-f318551247cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8cc92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-ha-055395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-wnz4m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-055395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-055395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-52vmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-055395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-vip-ha-055395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     5m6s                 cidrAllocator    Node ha-055395-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-055395-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal  RegisteredNode           4m58s                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	
	
	Name:               ha-055395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:07:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:10:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:07:34 +0000   Mon, 26 Aug 2024 11:07:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-055395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fad8927c4194cf6a2bc5a5e286dfbd0
	  System UUID:                0fad8927-c419-4cf6-a2bc-5a5e286dfbd0
	  Boot ID:                    be3015bb-1b6c-4cf5-9b0d-dc467942896c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4gpg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m59s
	  kube-system                 kube-proxy-758wf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m59s                  cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  CIDRAssignmentFailed     3m59s                  cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m59s (x2 over 3m59s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s (x2 over 3m59s)  kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s (x2 over 3m59s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node ha-055395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug26 11:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050670] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038233] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769390] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.925041] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.551281] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.796386] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.063641] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061452] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165458] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.147926] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.278562] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.051395] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.884243] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.058746] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.395019] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.102683] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.458120] kauditd_printk_skb: 21 callbacks suppressed
	[Aug26 11:04] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.777933] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5] <==
	{"level":"warn","ts":"2024-08-26T11:11:02.373469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.419893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.428440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.432666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.443000Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.450481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.458379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.463294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.467860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.473914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.475172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.483071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.490004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.493833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.499051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.509109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.517845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.525217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.529114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.532628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.536109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.543163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.551105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.574178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-26T11:11:02.601313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:11:02 up 7 min,  0 users,  load average: 0.16, 0.24, 0.12
	Linux ha-055395 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8] <==
	I0826 11:10:31.526822       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:10:41.527422       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:10:41.527528       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:10:41.527851       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:10:41.527882       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:10:41.527962       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:10:41.527987       1 main.go:299] handling current node
	I0826 11:10:41.528020       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:10:41.528042       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:10:51.532020       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:10:51.532086       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:10:51.532295       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:10:51.532317       1 main.go:299] handling current node
	I0826 11:10:51.532336       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:10:51.532341       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:10:51.532405       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:10:51.532423       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:11:01.524007       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:11:01.524275       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:11:01.524595       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:11:01.524677       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:11:01.524921       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:11:01.525034       1 main.go:299] handling current node
	I0826 11:11:01.525066       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:11:01.525124       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5] <==
	I0826 11:03:50.254958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 11:03:51.114973       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 11:03:51.138659       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0826 11:03:51.258113       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 11:03:55.201433       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0826 11:03:55.964408       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0826 11:05:57.302336       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 18.506µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0826 11:05:57.302422       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="380d2234-45a2-4699-acab-203701593ddb"
	E0826 11:05:57.302490       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.295µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0826 11:06:29.448626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57834: use of closed network connection
	E0826 11:06:29.636511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57848: use of closed network connection
	E0826 11:06:29.834954       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57880: use of closed network connection
	E0826 11:06:30.035184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57902: use of closed network connection
	E0826 11:06:30.229312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57916: use of closed network connection
	E0826 11:06:30.423379       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57936: use of closed network connection
	E0826 11:06:30.617088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57964: use of closed network connection
	E0826 11:06:30.809409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34382: use of closed network connection
	E0826 11:06:30.994954       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34392: use of closed network connection
	E0826 11:06:31.301280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34408: use of closed network connection
	E0826 11:06:31.477594       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34426: use of closed network connection
	E0826 11:06:31.666522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34446: use of closed network connection
	E0826 11:06:31.844190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34464: use of closed network connection
	E0826 11:06:32.044390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34486: use of closed network connection
	E0826 11:06:32.234076       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34508: use of closed network connection
	W0826 11:08:00.066257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.209]
	
	
	==> kube-controller-manager [bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b] <==
	E0826 11:07:03.663422       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-055395-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-055395-m04"
	E0826 11:07:03.663538       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-055395-m04': failed to patch node CIDR: Node \"ha-055395-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0826 11:07:03.663685       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.669413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.746900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:03.797582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.180240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.890414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:04.952434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:05.067583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:05.068067       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-055395-m04"
	I0826 11:07:05.157653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:13.770341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.076215       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:07:24.077165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.093855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:24.911620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:07:34.105500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:08:19.937723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:19.938491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:08:19.968522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:20.124907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.7478ms"
	I0826 11:08:20.125223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.072µs"
	I0826 11:08:20.151714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:08:25.192443       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	
	
	==> kube-proxy [4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:03:56.913791       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:03:56.925813       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0826 11:03:56.926041       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:03:56.969129       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:03:56.969172       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:03:56.969202       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:03:56.971710       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:03:56.972143       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:03:56.972290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:03:56.973983       1 config.go:197] "Starting service config controller"
	I0826 11:03:56.974108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:03:56.974188       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:03:56.974206       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:03:56.974826       1 config.go:326] "Starting node config controller"
	I0826 11:03:56.976097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:03:57.075115       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:03:57.075149       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:03:57.076414       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3] <==
	I0826 11:05:56.667408       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mnslg" node="ha-055395-m03"
	E0826 11:06:25.317284       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xh6vw\": pod busybox-7dff88458-xh6vw is already assigned to node \"ha-055395\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xh6vw" node="ha-055395"
	E0826 11:06:25.317379       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 94adba85-441f-40d9-bcf2-616b1bd587dc(default/busybox-7dff88458-xh6vw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xh6vw"
	E0826 11:06:25.317401       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xh6vw\": pod busybox-7dff88458-xh6vw is already assigned to node \"ha-055395\"" pod="default/busybox-7dff88458-xh6vw"
	I0826 11:06:25.317473       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xh6vw" node="ha-055395"
	E0826 11:07:03.627443       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-758wf\": pod kube-proxy-758wf is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-758wf" node="ha-055395-m04"
	E0826 11:07:03.627698       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-758wf\": pod kube-proxy-758wf is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-758wf"
	E0826 11:07:03.630860       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n4gpg\": pod kindnet-n4gpg is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-n4gpg" node="ha-055395-m04"
	E0826 11:07:03.630950       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n4gpg\": pod kindnet-n4gpg is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-n4gpg"
	E0826 11:07:03.708033       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xdd9l\": pod kindnet-xdd9l is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xdd9l" node="ha-055395-m04"
	E0826 11:07:03.708220       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-c8476\": pod kube-proxy-c8476 is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-c8476" node="ha-055395-m04"
	E0826 11:07:03.708278       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21e322d3-c564-4ec6-b66b-e86860280682(kube-system/kube-proxy-c8476) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-c8476"
	E0826 11:07:03.708304       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-c8476\": pod kube-proxy-c8476 is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-c8476"
	I0826 11:07:03.708325       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-c8476" node="ha-055395-m04"
	E0826 11:07:03.708436       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a8f1119a-bc1c-46d9-91fd-76553c71f1ff(kube-system/kindnet-xdd9l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-xdd9l"
	E0826 11:07:03.708516       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xdd9l\": pod kindnet-xdd9l is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-xdd9l"
	I0826 11:07:03.708579       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xdd9l" node="ha-055395-m04"
	E0826 11:07:03.708838       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.708887       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2ef2b044-3278-43d7-8164-a8b51d7f9424(kube-system/kube-proxy-kkwxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kkwxm"
	E0826 11:07:03.708901       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-kkwxm"
	I0826 11:07:03.708919       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.709603       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:07:03.711019       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45edff34-de36-493a-9dba-b74e8a326787(kube-system/kindnet-ww4xl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ww4xl"
	E0826 11:07:03.711136       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-ww4xl"
	I0826 11:07:03.711360       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	
	
	==> kubelet <==
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:09:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:09:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:09:51 ha-055395 kubelet[1329]: E0826 11:09:51.401819    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670591401413183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:09:51 ha-055395 kubelet[1329]: E0826 11:09:51.401859    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670591401413183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:01 ha-055395 kubelet[1329]: E0826 11:10:01.404157    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670601403727960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:01 ha-055395 kubelet[1329]: E0826 11:10:01.404209    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670601403727960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:11 ha-055395 kubelet[1329]: E0826 11:10:11.405557    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670611405289624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:11 ha-055395 kubelet[1329]: E0826 11:10:11.405581    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670611405289624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:21 ha-055395 kubelet[1329]: E0826 11:10:21.407598    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670621407239756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:21 ha-055395 kubelet[1329]: E0826 11:10:21.407638    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670621407239756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:31 ha-055395 kubelet[1329]: E0826 11:10:31.409451    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670631409072836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:31 ha-055395 kubelet[1329]: E0826 11:10:31.409493    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670631409072836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:41 ha-055395 kubelet[1329]: E0826 11:10:41.411400    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670641410987202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:41 ha-055395 kubelet[1329]: E0826 11:10:41.411876    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670641410987202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:51 ha-055395 kubelet[1329]: E0826 11:10:51.275562    1329 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:10:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:10:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:10:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:10:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:10:51 ha-055395 kubelet[1329]: E0826 11:10:51.413650    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670651413314393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:10:51 ha-055395 kubelet[1329]: E0826 11:10:51.413674    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670651413314393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:11:01 ha-055395 kubelet[1329]: E0826 11:11:01.414897    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670661414575065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:11:01 ha-055395 kubelet[1329]: E0826 11:11:01.414933    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724670661414575065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-055395 -n ha-055395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-055395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (395.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-055395 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-055395 -v=7 --alsologtostderr
E0826 11:12:20.477051  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:12:48.179810  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-055395 -v=7 --alsologtostderr: exit status 82 (2m1.889853135s)

                                                
                                                
-- stdout --
	* Stopping node "ha-055395-m04"  ...
	* Stopping node "ha-055395-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:11:04.077333  122723 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:11:04.077592  122723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:11:04.077601  122723 out.go:358] Setting ErrFile to fd 2...
	I0826 11:11:04.077605  122723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:11:04.077771  122723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:11:04.077978  122723 out.go:352] Setting JSON to false
	I0826 11:11:04.078077  122723 mustload.go:65] Loading cluster: ha-055395
	I0826 11:11:04.078428  122723 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:11:04.078523  122723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:11:04.078712  122723 mustload.go:65] Loading cluster: ha-055395
	I0826 11:11:04.078868  122723 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:11:04.078913  122723 stop.go:39] StopHost: ha-055395-m04
	I0826 11:11:04.079294  122723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:04.079338  122723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:04.094751  122723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0826 11:11:04.095302  122723 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:04.095936  122723 main.go:141] libmachine: Using API Version  1
	I0826 11:11:04.095964  122723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:04.096384  122723 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:04.099355  122723 out.go:177] * Stopping node "ha-055395-m04"  ...
	I0826 11:11:04.100816  122723 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 11:11:04.100848  122723 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:11:04.101133  122723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 11:11:04.101174  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:11:04.104183  122723 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:04.104689  122723 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:06:47 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:11:04.104726  122723 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:11:04.104831  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:11:04.105026  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:11:04.105180  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:11:04.105387  122723 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:11:04.193324  122723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 11:11:04.247189  122723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 11:11:04.300602  122723 main.go:141] libmachine: Stopping "ha-055395-m04"...
	I0826 11:11:04.300676  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:11:04.302446  122723 main.go:141] libmachine: (ha-055395-m04) Calling .Stop
	I0826 11:11:04.306196  122723 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 0/120
	I0826 11:11:05.482494  122723 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:11:05.483689  122723 main.go:141] libmachine: Machine "ha-055395-m04" was stopped.
	I0826 11:11:05.483709  122723 stop.go:75] duration metric: took 1.382895488s to stop
	I0826 11:11:05.483734  122723 stop.go:39] StopHost: ha-055395-m03
	I0826 11:11:05.484046  122723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:11:05.484095  122723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:11:05.499097  122723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0826 11:11:05.499573  122723 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:11:05.500089  122723 main.go:141] libmachine: Using API Version  1
	I0826 11:11:05.500121  122723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:11:05.500499  122723 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:11:05.503006  122723 out.go:177] * Stopping node "ha-055395-m03"  ...
	I0826 11:11:05.504587  122723 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 11:11:05.504619  122723 main.go:141] libmachine: (ha-055395-m03) Calling .DriverName
	I0826 11:11:05.504923  122723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 11:11:05.504956  122723 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHHostname
	I0826 11:11:05.507973  122723 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:05.508464  122723 main.go:141] libmachine: (ha-055395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:85:18", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:05:24 +0000 UTC Type:0 Mac:52:54:00:66:85:18 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-055395-m03 Clientid:01:52:54:00:66:85:18}
	I0826 11:11:05.508540  122723 main.go:141] libmachine: (ha-055395-m03) DBG | domain ha-055395-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:66:85:18 in network mk-ha-055395
	I0826 11:11:05.508638  122723 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHPort
	I0826 11:11:05.508827  122723 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHKeyPath
	I0826 11:11:05.508998  122723 main.go:141] libmachine: (ha-055395-m03) Calling .GetSSHUsername
	I0826 11:11:05.509146  122723 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m03/id_rsa Username:docker}
	I0826 11:11:05.602550  122723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 11:11:05.655040  122723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 11:11:05.709243  122723 main.go:141] libmachine: Stopping "ha-055395-m03"...
	I0826 11:11:05.709266  122723 main.go:141] libmachine: (ha-055395-m03) Calling .GetState
	I0826 11:11:05.710797  122723 main.go:141] libmachine: (ha-055395-m03) Calling .Stop
	I0826 11:11:05.714713  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 0/120
	I0826 11:11:06.716225  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 1/120
	I0826 11:11:07.717873  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 2/120
	I0826 11:11:08.719377  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 3/120
	I0826 11:11:09.721405  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 4/120
	I0826 11:11:10.723612  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 5/120
	I0826 11:11:11.724933  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 6/120
	I0826 11:11:12.726320  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 7/120
	I0826 11:11:13.727773  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 8/120
	I0826 11:11:14.729511  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 9/120
	I0826 11:11:15.731724  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 10/120
	I0826 11:11:16.733596  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 11/120
	I0826 11:11:17.735276  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 12/120
	I0826 11:11:18.736809  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 13/120
	I0826 11:11:19.738279  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 14/120
	I0826 11:11:20.740229  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 15/120
	I0826 11:11:21.742012  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 16/120
	I0826 11:11:22.743650  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 17/120
	I0826 11:11:23.745346  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 18/120
	I0826 11:11:24.747274  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 19/120
	I0826 11:11:25.749554  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 20/120
	I0826 11:11:26.751121  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 21/120
	I0826 11:11:27.752716  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 22/120
	I0826 11:11:28.754665  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 23/120
	I0826 11:11:29.756216  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 24/120
	I0826 11:11:30.758253  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 25/120
	I0826 11:11:31.760156  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 26/120
	I0826 11:11:32.761550  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 27/120
	I0826 11:11:33.763477  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 28/120
	I0826 11:11:34.765494  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 29/120
	I0826 11:11:35.767494  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 30/120
	I0826 11:11:36.769126  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 31/120
	I0826 11:11:37.770913  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 32/120
	I0826 11:11:38.772124  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 33/120
	I0826 11:11:39.773565  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 34/120
	I0826 11:11:40.776157  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 35/120
	I0826 11:11:41.777604  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 36/120
	I0826 11:11:42.779567  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 37/120
	I0826 11:11:43.780996  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 38/120
	I0826 11:11:44.783071  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 39/120
	I0826 11:11:45.785062  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 40/120
	I0826 11:11:46.786574  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 41/120
	I0826 11:11:47.788167  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 42/120
	I0826 11:11:48.789558  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 43/120
	I0826 11:11:49.791214  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 44/120
	I0826 11:11:50.793524  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 45/120
	I0826 11:11:51.795015  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 46/120
	I0826 11:11:52.796448  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 47/120
	I0826 11:11:53.797811  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 48/120
	I0826 11:11:54.800185  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 49/120
	I0826 11:11:55.801962  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 50/120
	I0826 11:11:56.803498  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 51/120
	I0826 11:11:57.804850  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 52/120
	I0826 11:11:58.806956  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 53/120
	I0826 11:11:59.808407  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 54/120
	I0826 11:12:00.810638  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 55/120
	I0826 11:12:01.812257  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 56/120
	I0826 11:12:02.813988  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 57/120
	I0826 11:12:03.815442  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 58/120
	I0826 11:12:04.817402  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 59/120
	I0826 11:12:05.819433  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 60/120
	I0826 11:12:06.821198  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 61/120
	I0826 11:12:07.823088  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 62/120
	I0826 11:12:08.824578  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 63/120
	I0826 11:12:09.826069  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 64/120
	I0826 11:12:10.827946  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 65/120
	I0826 11:12:11.829485  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 66/120
	I0826 11:12:12.830979  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 67/120
	I0826 11:12:13.833224  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 68/120
	I0826 11:12:14.834625  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 69/120
	I0826 11:12:15.836698  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 70/120
	I0826 11:12:16.837902  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 71/120
	I0826 11:12:17.839534  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 72/120
	I0826 11:12:18.841402  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 73/120
	I0826 11:12:19.843000  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 74/120
	I0826 11:12:20.844501  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 75/120
	I0826 11:12:21.846642  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 76/120
	I0826 11:12:22.848381  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 77/120
	I0826 11:12:23.849996  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 78/120
	I0826 11:12:24.851729  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 79/120
	I0826 11:12:25.854106  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 80/120
	I0826 11:12:26.855948  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 81/120
	I0826 11:12:27.857350  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 82/120
	I0826 11:12:28.858813  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 83/120
	I0826 11:12:29.860261  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 84/120
	I0826 11:12:30.862235  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 85/120
	I0826 11:12:31.863766  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 86/120
	I0826 11:12:32.865501  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 87/120
	I0826 11:12:33.867101  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 88/120
	I0826 11:12:34.868683  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 89/120
	I0826 11:12:35.870089  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 90/120
	I0826 11:12:36.871607  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 91/120
	I0826 11:12:37.873100  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 92/120
	I0826 11:12:38.874755  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 93/120
	I0826 11:12:39.876183  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 94/120
	I0826 11:12:40.878116  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 95/120
	I0826 11:12:41.879595  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 96/120
	I0826 11:12:42.881150  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 97/120
	I0826 11:12:43.882488  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 98/120
	I0826 11:12:44.884034  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 99/120
	I0826 11:12:45.886068  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 100/120
	I0826 11:12:46.887567  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 101/120
	I0826 11:12:47.889162  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 102/120
	I0826 11:12:48.890889  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 103/120
	I0826 11:12:49.892560  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 104/120
	I0826 11:12:50.894585  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 105/120
	I0826 11:12:51.895999  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 106/120
	I0826 11:12:52.897238  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 107/120
	I0826 11:12:53.898913  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 108/120
	I0826 11:12:54.900660  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 109/120
	I0826 11:12:55.902460  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 110/120
	I0826 11:12:56.904028  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 111/120
	I0826 11:12:57.905935  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 112/120
	I0826 11:12:58.907286  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 113/120
	I0826 11:12:59.908653  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 114/120
	I0826 11:13:00.910047  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 115/120
	I0826 11:13:01.911903  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 116/120
	I0826 11:13:02.913279  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 117/120
	I0826 11:13:03.914639  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 118/120
	I0826 11:13:04.916390  122723 main.go:141] libmachine: (ha-055395-m03) Waiting for machine to stop 119/120
	I0826 11:13:05.917037  122723 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 11:13:05.917091  122723 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0826 11:13:05.919048  122723 out.go:201] 
	W0826 11:13:05.920332  122723 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0826 11:13:05.920347  122723 out.go:270] * 
	* 
	W0826 11:13:05.922775  122723 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 11:13:05.924152  122723 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-055395 -v=7 --alsologtostderr" : exit status 82
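The stderr block above shows why the stop failed: ha-055395-m04 shut down within about a second, but ha-055395-m03 was still "Running" after all 120 one-second polls, so minikube gave up with GUEST_STOP_TIMEOUT (exit status 82). A rough sketch of that wait loop, assuming the 120-attempt, one-second cadence inferred from the "Waiting for machine to stop N/120" lines (the function and parameter names are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout asks the driver to stop a VM, then polls its state once per second,
// giving up after maxAttempts polls -- the same shape as the captured log above.
func stopWithTimeout(requestStop func() error, getState func() string, maxAttempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate ha-055395-m03: the guest never leaves "Running", so the loop exhausts
	// its attempts and returns the same timeout error that the test captured.
	err := stopWithTimeout(
		func() error { return nil },        // pretend the stop request itself succeeded
		func() string { return "Running" }, // state never changes
		5,                                  // 120 in the real run; shortened here
	)
	fmt.Println("stop result:", err)
}

With the real 120-poll budget this corresponds to roughly two minutes of waiting before the error surfaces, which matches the 2m1.88s duration reported for the stop command above.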
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-055395 --wait=true -v=7 --alsologtostderr
E0826 11:14:34.326629  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:15:57.392815  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:17:20.477148  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-055395 --wait=true -v=7 --alsologtostderr: (4m30.568543504s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-055395
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-055395 -n ha-055395
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-055395 logs -n 25: (1.853166206s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m04 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp testdata/cp-test.txt                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m04_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03:/home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m03 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-055395 node stop m02 -v=7                                                     | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-055395 node start m02 -v=7                                                    | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-055395 -v=7                                                           | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-055395 -v=7                                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-055395 --wait=true -v=7                                                    | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:13 UTC | 26 Aug 24 11:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-055395                                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:17 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:13:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:13:05.972526  123193 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:13:05.972664  123193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:13:05.972674  123193 out.go:358] Setting ErrFile to fd 2...
	I0826 11:13:05.972678  123193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:13:05.972905  123193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:13:05.973505  123193 out.go:352] Setting JSON to false
	I0826 11:13:05.974451  123193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3327,"bootTime":1724667459,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:13:05.974518  123193 start.go:139] virtualization: kvm guest
	I0826 11:13:05.980809  123193 out.go:177] * [ha-055395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:13:05.986500  123193 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:13:05.986504  123193 notify.go:220] Checking for updates...
	I0826 11:13:05.989822  123193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:13:05.991398  123193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:13:05.992722  123193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:13:05.994201  123193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:13:05.995819  123193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:13:05.997945  123193 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:13:05.998078  123193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:13:05.998723  123193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:13:05.998819  123193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:13:06.015878  123193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0826 11:13:06.016561  123193 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:13:06.017181  123193 main.go:141] libmachine: Using API Version  1
	I0826 11:13:06.017208  123193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:13:06.017647  123193 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:13:06.017850  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.059222  123193 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:13:06.060846  123193 start.go:297] selected driver: kvm2
	I0826 11:13:06.060871  123193 start.go:901] validating driver "kvm2" against &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:13:06.061034  123193 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:13:06.061389  123193 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:13:06.061486  123193 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:13:06.077452  123193 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:13:06.078262  123193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:13:06.078324  123193 cni.go:84] Creating CNI manager for ""
	I0826 11:13:06.078336  123193 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 11:13:06.078394  123193 start.go:340] cluster config:
	{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:13:06.078577  123193 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:13:06.080853  123193 out.go:177] * Starting "ha-055395" primary control-plane node in "ha-055395" cluster
	I0826 11:13:06.082350  123193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:13:06.082396  123193 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:13:06.082409  123193 cache.go:56] Caching tarball of preloaded images
	I0826 11:13:06.082515  123193 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:13:06.082526  123193 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:13:06.082658  123193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:13:06.082904  123193 start.go:360] acquireMachinesLock for ha-055395: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:13:06.082956  123193 start.go:364] duration metric: took 30.732µs to acquireMachinesLock for "ha-055395"
	I0826 11:13:06.082977  123193 start.go:96] Skipping create...Using existing machine configuration
	I0826 11:13:06.082985  123193 fix.go:54] fixHost starting: 
	I0826 11:13:06.083232  123193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:13:06.083271  123193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:13:06.098695  123193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0826 11:13:06.099178  123193 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:13:06.099767  123193 main.go:141] libmachine: Using API Version  1
	I0826 11:13:06.099815  123193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:13:06.100245  123193 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:13:06.100483  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.100675  123193 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:13:06.102613  123193 fix.go:112] recreateIfNeeded on ha-055395: state=Running err=<nil>
	W0826 11:13:06.102658  123193 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 11:13:06.104806  123193 out.go:177] * Updating the running kvm2 "ha-055395" VM ...
	I0826 11:13:06.106160  123193 machine.go:93] provisionDockerMachine start ...
	I0826 11:13:06.106192  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.106473  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.109432  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.109980  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.110009  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.110231  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.110457  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.110649  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.110792  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.111029  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.111281  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.111294  123193 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 11:13:06.224598  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:13:06.224636  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.224952  123193 buildroot.go:166] provisioning hostname "ha-055395"
	I0826 11:13:06.224982  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.225168  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.227866  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.228317  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.228351  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.228557  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.228791  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.228983  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.229119  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.229314  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.229485  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.229498  123193 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395 && echo "ha-055395" | sudo tee /etc/hostname
	I0826 11:13:06.362622  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:13:06.362660  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.365442  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.365874  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.365904  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.366107  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.366311  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.366482  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.366619  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.366793  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.366990  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.367007  123193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:13:06.475650  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:13:06.475683  123193 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:13:06.475706  123193 buildroot.go:174] setting up certificates
	I0826 11:13:06.475716  123193 provision.go:84] configureAuth start
	I0826 11:13:06.475729  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.476021  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:13:06.478775  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.479208  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.479237  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.479454  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.481720  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.482145  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.482168  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.482329  123193 provision.go:143] copyHostCerts
	I0826 11:13:06.482362  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:13:06.482401  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:13:06.482420  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:13:06.482491  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:13:06.482565  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:13:06.482581  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:13:06.482587  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:13:06.482609  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:13:06.482652  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:13:06.482669  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:13:06.482675  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:13:06.482698  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:13:06.482743  123193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395 san=[127.0.0.1 192.168.39.150 ha-055395 localhost minikube]
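The server certificate generated here is only trusted for the names and addresses listed in san=[...]; if the machine were later reached by an address outside that list, TLS verification would fail. A minimal sketch, using the server.pem path shown above and plain openssl (not part of the minikube run itself), for inspecting which SANs actually ended up in the certificate:

    # print the Subject Alternative Names baked into the generated server cert
    CERT=/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem
    openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name'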
	I0826 11:13:06.542046  123193 provision.go:177] copyRemoteCerts
	I0826 11:13:06.542106  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:13:06.542129  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.545265  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.545674  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.545706  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.545933  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.546130  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.546253  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.546432  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:13:06.629241  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:13:06.629313  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:13:06.656782  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:13:06.656874  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0826 11:13:06.683314  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:13:06.683387  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 11:13:06.709769  123193 provision.go:87] duration metric: took 234.035583ms to configureAuth
	I0826 11:13:06.709807  123193 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:13:06.710064  123193 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:13:06.710139  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.712949  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.713376  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.713407  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.713614  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.713827  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.713977  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.714096  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.714240  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.714438  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.714463  123193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:14:37.499599  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:14:37.499635  123193 machine.go:96] duration metric: took 1m31.393448396s to provisionDockerMachine
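Nearly all of the 1m31s reported for provisionDockerMachine is the "systemctl restart crio" triggered by the sysconfig change above: the SSH command was issued at 11:13:06 and only returned at 11:14:37. A minimal sketch, run inside the guest with standard systemd tooling (not something the test itself runs), for confirming where that time went:

    # when did the crio unit last (re)start, and how long did it take to become active?
    systemctl show crio -p ExecMainStartTimestamp -p ActiveEnterTimestamp
    # recent unit log, useful when a restart stalls on storage or image cleanup
    sudo journalctl -u crio -b --no-pager | tail -n 50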
	I0826 11:14:37.499648  123193 start.go:293] postStartSetup for "ha-055395" (driver="kvm2")
	I0826 11:14:37.499659  123193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:14:37.499676  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.500016  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:14:37.500052  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.503073  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.503484  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.503510  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.503697  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.503910  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.504095  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.504255  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.585617  123193 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:14:37.589707  123193 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:14:37.589728  123193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:14:37.589794  123193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:14:37.589883  123193 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:14:37.589898  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:14:37.590009  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:14:37.598989  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:14:37.622120  123193 start.go:296] duration metric: took 122.453834ms for postStartSetup
	I0826 11:14:37.622178  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.622494  123193 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0826 11:14:37.622521  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.625236  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.625681  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.625703  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.625888  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.626073  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.626248  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.626429  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	W0826 11:14:37.704862  123193 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0826 11:14:37.704892  123193 fix.go:56] duration metric: took 1m31.621907358s for fixHost
	I0826 11:14:37.704919  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.707720  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.708155  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.708183  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.708361  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.708634  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.708844  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.709011  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.709158  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:14:37.709332  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:14:37.709346  123193 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:14:37.811879  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670877.778682570
	
	I0826 11:14:37.811905  123193 fix.go:216] guest clock: 1724670877.778682570
	I0826 11:14:37.811916  123193 fix.go:229] Guest: 2024-08-26 11:14:37.77868257 +0000 UTC Remote: 2024-08-26 11:14:37.704904399 +0000 UTC m=+91.769863937 (delta=73.778171ms)
	I0826 11:14:37.811944  123193 fix.go:200] guest clock delta is within tolerance: 73.778171ms
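The fix above compares the guest's `date +%s.%N` against the host wall clock and only forces a resync when the delta exceeds minikube's tolerance; here the skew is ~74ms, so nothing is done. A minimal sketch for measuring the same skew by hand, assuming the guest IP, SSH user and key path that appear earlier in this log:

    # measure guest-vs-host clock skew over SSH (paths taken from this log)
    GUEST_IP=192.168.39.150
    SSH_KEY=/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i "$SSH_KEY" docker@"$GUEST_IP" 'date +%s.%N')
    echo "guest - host = $(echo "$guest_ts - $host_ts" | bc) seconds"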
	I0826 11:14:37.811952  123193 start.go:83] releasing machines lock for "ha-055395", held for 1m31.728983246s
	I0826 11:14:37.811977  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.812275  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:14:37.814931  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.815336  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.815365  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.815609  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816156  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816374  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816485  123193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:14:37.816545  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.816615  123193 ssh_runner.go:195] Run: cat /version.json
	I0826 11:14:37.816644  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.819278  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819622  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.819646  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819801  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819832  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.820048  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.820191  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.820215  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.820217  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.820403  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.820467  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.820562  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.820734  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.820882  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.936654  123193 ssh_runner.go:195] Run: systemctl --version
	I0826 11:14:37.942875  123193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:14:38.100026  123193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:14:38.109666  123193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:14:38.109755  123193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:14:38.119060  123193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 11:14:38.119088  123193 start.go:495] detecting cgroup driver to use...
	I0826 11:14:38.119167  123193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:14:38.135424  123193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:14:38.149564  123193 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:14:38.149633  123193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:14:38.163506  123193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:14:38.177559  123193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:14:38.329098  123193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:14:38.484602  123193 docker.go:233] disabling docker service ...
	I0826 11:14:38.484704  123193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:14:38.503936  123193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:14:38.519093  123193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:14:38.690930  123193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:14:38.852272  123193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:14:38.867965  123193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:14:38.886491  123193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:14:38.886565  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.897487  123193 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:14:38.897554  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.908297  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.919147  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.929313  123193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:14:38.940137  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.950735  123193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.961394  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.972102  123193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:14:38.982223  123193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
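The sed/grep sequence above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls. A minimal sketch for verifying the effective values after the restart, reusing the `crio config` command that also appears later in this log:

    # dump the merged CRI-O configuration and pick out the fields minikube just set
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'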
	I0826 11:14:38.992083  123193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:14:39.149744  123193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:14:41.775213  123193 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.625428001s)
	I0826 11:14:41.775252  123193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:14:41.775312  123193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:14:41.780065  123193 start.go:563] Will wait 60s for crictl version
	I0826 11:14:41.780139  123193 ssh_runner.go:195] Run: which crictl
	I0826 11:14:41.783817  123193 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:14:41.825379  123193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:14:41.825482  123193 ssh_runner.go:195] Run: crio --version
	I0826 11:14:41.854410  123193 ssh_runner.go:195] Run: crio --version
	I0826 11:14:41.886143  123193 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:14:41.887414  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:14:41.890088  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:41.890460  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:41.890489  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:41.890699  123193 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:14:41.895155  123193 kubeadm.go:883] updating cluster {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:14:41.895329  123193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:14:41.895396  123193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:14:41.941359  123193 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:14:41.941392  123193 crio.go:433] Images already preloaded, skipping extraction
	I0826 11:14:41.941446  123193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:14:41.979061  123193 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:14:41.979093  123193 cache_images.go:84] Images are preloaded, skipping loading
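The two `crictl images --output json` calls above are how minikube decides that the v1.31.0/cri-o preload tarball does not need to be extracted again: everything it expects is already in the CRI-O image store. A minimal sketch for eyeballing the same thing from the guest:

    # list the preloaded control-plane images CRI-O already has
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'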
	I0826 11:14:41.979107  123193 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.0 crio true true} ...
	I0826 11:14:41.979247  123193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
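The kubelet unit shown above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below), so the node-specific flags --hostname-override and --node-ip override whatever the base kubelet.service defines. A small sketch, run inside the guest with standard systemd commands, for inspecting the merged unit:

    # show the base unit plus all drop-ins, then the final ExecStart the kubelet runs with
    systemctl cat kubelet
    systemctl show kubelet -p FragmentPath -p DropInPaths -p ExecStart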
	I0826 11:14:41.979335  123193 ssh_runner.go:195] Run: crio config
	I0826 11:14:42.035807  123193 cni.go:84] Creating CNI manager for ""
	I0826 11:14:42.035830  123193 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 11:14:42.035846  123193 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:14:42.035875  123193 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-055395 NodeName:ha-055395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:14:42.036062  123193 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-055395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
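The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; recent kubeadm releases ship a `config validate` subcommand that can check such a file offline without touching the node. A minimal sketch, assuming the binary path and file name shown in this log and that the installed kubeadm supports the subcommand:

    # sanity-check the generated kubeadm config offline
    KUBEADM=/var/lib/minikube/binaries/v1.31.0/kubeadm
    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new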
	
	I0826 11:14:42.036085  123193 kube-vip.go:115] generating kube-vip config ...
	I0826 11:14:42.036139  123193 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:14:42.047736  123193 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:14:42.047869  123193 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
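kube-vip runs as a static pod on each control-plane node; whichever instance wins the plndr-cp-lock lease adds the VIP 192.168.39.254/32 to eth0 and, with lb_enable set, load-balances API traffic on 8443. A minimal sketch for finding out which node currently owns the address, using only names that appear in the manifest above plus standard ip/kubectl commands:

    # on a control-plane node: does this machine currently hold the VIP?
    ip -4 addr show dev eth0 | grep 192.168.39.254 && echo "this node holds the VIP"
    # from anywhere with kubeconfig access: who owns the kube-vip leader lease?
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'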
	I0826 11:14:42.047934  123193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:14:42.058174  123193 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:14:42.058301  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0826 11:14:42.068035  123193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0826 11:14:42.084250  123193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:14:42.100649  123193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0826 11:14:42.117899  123193 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:14:42.136179  123193 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:14:42.140225  123193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:14:42.283952  123193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:14:42.298814  123193 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.150
	I0826 11:14:42.298860  123193 certs.go:194] generating shared ca certs ...
	I0826 11:14:42.298884  123193 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.299081  123193 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:14:42.299124  123193 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:14:42.299137  123193 certs.go:256] generating profile certs ...
	I0826 11:14:42.299215  123193 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:14:42.299246  123193 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715
	I0826 11:14:42.299283  123193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.209 192.168.39.254]
	I0826 11:14:42.471744  123193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 ...
	I0826 11:14:42.471780  123193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715: {Name:mk5497018f8a9b324095792b91b09a556316831e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.471994  123193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715 ...
	I0826 11:14:42.472013  123193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715: {Name:mkfba1a7079200f67ef713b5dcc30c2d61c3cfee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.472121  123193 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:14:42.472265  123193 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:14:42.472393  123193 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:14:42.472410  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:14:42.472424  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:14:42.472437  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:14:42.472449  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:14:42.472462  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:14:42.472474  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:14:42.472493  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:14:42.472505  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:14:42.472556  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:14:42.472593  123193 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:14:42.472602  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:14:42.472625  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:14:42.472646  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:14:42.472669  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:14:42.472705  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:14:42.472730  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.472743  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.472758  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.473309  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:14:42.498721  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:14:42.523079  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:14:42.546565  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:14:42.571407  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 11:14:42.595852  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:14:42.627257  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:14:42.653266  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:14:42.679000  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:14:42.702897  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:14:42.727009  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:14:42.750956  123193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:14:42.766630  123193 ssh_runner.go:195] Run: openssl version
	I0826 11:14:42.772257  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:14:42.782541  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.786909  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.786973  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.792540  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:14:42.801894  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:14:42.812704  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.816924  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.816982  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.822387  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:14:42.831896  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:14:42.843195  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.847811  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.847881  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.854063  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
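The test -L / ln -fs pairs above create the hashed symlinks OpenSSL uses to locate trusted CAs: b5213941.0, 51391683.0 and 3ec20f2e.0 are the subject hashes of the three certificates just copied into /usr/share/ca-certificates. A minimal sketch showing how such a link is derived for any CA file, using the minikubeCA.pem path from this log as the example:

    # derive the OpenSSL subject-hash symlink for a CA certificate
    CA=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CA")
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"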
	I0826 11:14:42.864186  123193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:14:42.868556  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 11:14:42.874071  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 11:14:42.879702  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 11:14:42.885099  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 11:14:42.890770  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 11:14:42.896155  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
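`-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether the existing control-plane certs can be reused as-is. A minimal sketch that reports the same check for each cert probed above:

    # report which control-plane certs are still valid for at least 24h
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400; then
        echo "$c: ok"
      else
        echo "$c: expiring within 24h"
      fi
    done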
	I0826 11:14:42.901577  123193 kubeadm.go:392] StartCluster: {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:14:42.901719  123193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:14:42.901768  123193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:14:42.939101  123193 cri.go:89] found id: "f089083a0f12732599bd9007e4e46787aacb1806485e186d536fc6c3c5c88b4b"
	I0826 11:14:42.939128  123193 cri.go:89] found id: "a0d4d655ef65a314578371d034d4b81675c6c98786e609ba4282e0490966cae8"
	I0826 11:14:42.939132  123193 cri.go:89] found id: "ff3194e112f6dde16694850256b28235cc541fdd6c157c015335202884411715"
	I0826 11:14:42.939135  123193 cri.go:89] found id: "80c1b2c3d22b0215c4e6ce214890fd441801844dbfb230aabeb34c3ba312f453"
	I0826 11:14:42.939142  123193 cri.go:89] found id: "588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9"
	I0826 11:14:42.939146  123193 cri.go:89] found id: "9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e"
	I0826 11:14:42.939148  123193 cri.go:89] found id: "d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8"
	I0826 11:14:42.939151  123193 cri.go:89] found id: "4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235"
	I0826 11:14:42.939153  123193 cri.go:89] found id: "d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e"
	I0826 11:14:42.939159  123193 cri.go:89] found id: "9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3"
	I0826 11:14:42.939176  123193 cri.go:89] found id: "9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5"
	I0826 11:14:42.939182  123193 cri.go:89] found id: "bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b"
	I0826 11:14:42.939185  123193 cri.go:89] found id: "37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5"
	I0826 11:14:42.939201  123193 cri.go:89] found id: ""
	I0826 11:14:42.939257  123193 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.265099757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671057265067151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9f1c643-7fd2-4fee-a680-d2b616216d27 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.265829486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4dbfaf2-d59f-4b2f-8112-a2f71ea404d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.265923482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4dbfaf2-d59f-4b2f-8112-a2f71ea404d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.266604597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4dbfaf2-d59f-4b2f-8112-a2f71ea404d4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.321588738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18df3b62-4482-44ec-9cbf-621888d3463e name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.321684230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18df3b62-4482-44ec-9cbf-621888d3463e name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.323627672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5490224e-78e1-4598-84d6-4c45613f652f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.325085671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671057325053770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5490224e-78e1-4598-84d6-4c45613f652f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.325854947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17fb726f-f0f0-4fcb-a1eb-e8d55aad67d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.325941266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17fb726f-f0f0-4fcb-a1eb-e8d55aad67d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.326499548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17fb726f-f0f0-4fcb-a1eb-e8d55aad67d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.371409449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=249c172d-ded1-487d-b3a5-fb34298eb27a name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.371547458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=249c172d-ded1-487d-b3a5-fb34298eb27a name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.372646053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e1d536c-eb6b-49cb-ac90-8af46fce080a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.373181916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671057373155501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e1d536c-eb6b-49cb-ac90-8af46fce080a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.373705120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d91ba76-f1c5-4bfe-8db1-8ff48e0a7561 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.373848250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d91ba76-f1c5-4bfe-8db1-8ff48e0a7561 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.374511670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d91ba76-f1c5-4bfe-8db1-8ff48e0a7561 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.419358964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cf6bc37-d2e3-4ab6-9c20-e74db9e609d8 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.419469468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cf6bc37-d2e3-4ab6-9c20-e74db9e609d8 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.420957355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7463bc8d-a74d-4bfa-afea-7bf50afc2d0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.421411196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671057421384227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7463bc8d-a74d-4bfa-afea-7bf50afc2d0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.422081925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56204318-2e8d-4a32-a7b1-cddf847d5e0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.422190307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56204318-2e8d-4a32-a7b1-cddf847d5e0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:17:37 ha-055395 crio[3672]: time="2024-08-26 11:17:37.422616896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56204318-2e8d-4a32-a7b1-cddf847d5e0d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8e247f0cb6ee2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      34 seconds ago       Running             storage-provisioner       5                   5481856a84f01       storage-provisioner
	4735d890e73b4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   3                   7db9b4dfb41b5       kube-controller-manager-ha-055395
	f8b101352f735       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   a5997cca5dc2f       kube-apiserver-ha-055395
	9b429292e658d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   5481856a84f01       storage-provisioner
	928140b20a95f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   2b5049e4e5b5f       busybox-7dff88458-xh6vw
	8e71c83fab111       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   2                   7db9b4dfb41b5       kube-controller-manager-ha-055395
	f2d7c2151209b       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   a7119d5357186       kube-vip-ha-055395
	07dedbd1eb60f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   575df53facd27       coredns-6f6b679f8f-l9bd4
	1e9ddffb81c9f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   e70212335fe57       kube-proxy-g45pb
	79c290adde24b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   f6eebe19a373f       kindnet-z2rh2
	9e2b5b7689208       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   926669bae3cfe       coredns-6f6b679f8f-nxb7s
	938e88cf27c38       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   a5997cca5dc2f       kube-apiserver-ha-055395
	49b2d6a852b11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   39d4eeb3baee1       kube-scheduler-ha-055395
	113af412b49ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   d2b709b0b3cf0       etcd-ha-055395
	d2f106e1bd830       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   a356e619a2186       busybox-7dff88458-xh6vw
	588201165ca01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   73e7528d83ce5       coredns-6f6b679f8f-nxb7s
	9fdad1c79bb41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   3593c2f74b608       coredns-6f6b679f8f-l9bd4
	d5ffe25b55c8a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   3f092331272f7       kindnet-z2rh2
	4518376ec7b4a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   dd6c20478efce       kube-proxy-g45pb
	9f71e1964ec11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   d03f237462672       kube-scheduler-ha-055395
	9500eb08ad452       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   40a84124456a3       etcd-ha-055395
	
	
	==> coredns [07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe] <==
	Trace[290535648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (11:15:04.633)
	Trace[290535648]: [10.002169458s] [10.002169458s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9] <==
	[INFO] 10.244.0.4:49284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199959s
	[INFO] 10.244.3.2:38694 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.3.2:55559 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116423s
	[INFO] 10.244.1.2:38712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274813s
	[INFO] 10.244.1.2:38536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091302s
	[INFO] 10.244.0.4:35805 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089054s
	[INFO] 10.244.0.4:53560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109072s
	[INFO] 10.244.0.4:50886 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061358s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1853&timeout=7m19s&timeoutSeconds=439&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1852&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1913791900]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (26-Aug-2024 11:14:58.020) (total time: 10000ms):
	Trace[1913791900]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:15:08.021)
	Trace[1913791900]: [10.000976505s] [10.000976505s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41626->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41626->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e] <==
	[INFO] 10.244.0.4:57644 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001630694s
	[INFO] 10.244.3.2:35262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118796s
	[INFO] 10.244.3.2:56831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004452512s
	[INFO] 10.244.3.2:50141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195651s
	[INFO] 10.244.3.2:52724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157926s
	[INFO] 10.244.3.2:48168 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135307s
	[INFO] 10.244.1.2:49021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099106s
	[INFO] 10.244.0.4:33653 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173931s
	[INFO] 10.244.0.4:49095 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089973s
	[INFO] 10.244.1.2:60072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132366s
	[INFO] 10.244.1.2:45712 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081817s
	[INFO] 10.244.1.2:47110 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082159s
	[INFO] 10.244.0.4:48619 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100933s
	[INFO] 10.244.0.4:37358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069397s
	[INFO] 10.244.0.4:46981 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092796s
	[INFO] 10.244.3.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240921s
	[INFO] 10.244.3.2:44319 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002198s
	[INFO] 10.244.1.2:48438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216864s
	[INFO] 10.244.1.2:45176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133331s
	[INFO] 10.244.0.4:41108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112163s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-055395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:04:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-055395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 68841a7ef08f47a386553bd433710191
	  System UUID:                68841a7e-f08f-47a3-8655-3bd433710191
	  Boot ID:                    be93c222-ff08-41d5-baae-cb87ba3b44cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xh6vw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-l9bd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-nxb7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-055395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-z2rh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-055395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-055395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-g45pb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-055395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-055395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m3s                  kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                   kubelet          Node ha-055395 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                   kubelet          Node ha-055395 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                   kubelet          Node ha-055395 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-055395 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Warning  ContainerGCFailed        3m46s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m9s (x3 over 3m59s)  kubelet          Node ha-055395 status is now: NodeNotReady
	  Normal   RegisteredNode           2m8s                  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           99s                   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           39s                   node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	
	
	Name:               ha-055395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:04:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:17:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-055395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9151de0e0e3545e983307f4ed75379a4
	  System UUID:                9151de0e-0e35-45e9-8330-7f4ed75379a4
	  Boot ID:                    50ba8f5e-7d65-475d-ad47-4d0ae2236d0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbwm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-055395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-js2cb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-055395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-055395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zl5bm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-055395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-055395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-055395-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  NodeNotReady             9m18s                  node-controller  Node ha-055395-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           99s                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	
	
	Name:               ha-055395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_05_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:05:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:17:13 +0000   Mon, 26 Aug 2024 11:16:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:17:13 +0000   Mon, 26 Aug 2024 11:16:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:17:13 +0000   Mon, 26 Aug 2024 11:16:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:17:13 +0000   Mon, 26 Aug 2024 11:16:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-055395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85be43a1fb394f4ea22aa7e3674c88fc
	  System UUID:                85be43a1-fb39-4f4e-a22a-a7e3674c88fc
	  Boot ID:                    90068d8d-0144-4842-ab33-68f06e9c5e08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8cc92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-055395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-wnz4m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-055395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-055395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-52vmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-055395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-055395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 38s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-055395-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-055395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-055395-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-055395-m03 has been rebooted, boot id: 90068d8d-0144-4842-ab33-68f06e9c5e08
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-055395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-055395-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                55s                kubelet          Node ha-055395-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-055395-m03 event: Registered Node ha-055395-m03 in Controller
	
	
	Name:               ha-055395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:07:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:17:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:17:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:17:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:17:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-055395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fad8927c4194cf6a2bc5a5e286dfbd0
	  System UUID:                0fad8927-c419-4cf6-a2bc-5a5e286dfbd0
	  Boot ID:                    42cb7836-fe18-42d4-950b-8712451bd9c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4gpg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-758wf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-055395-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   NodeNotReady             88s                node-controller  Node ha-055395-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-055395-m04 has been rebooted, boot id: 42cb7836-fe18-42d4-950b-8712451bd9c6
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-055395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063641] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061452] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165458] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.147926] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.278562] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.051395] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.884243] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.058746] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.395019] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.102683] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.458120] kauditd_printk_skb: 21 callbacks suppressed
	[Aug26 11:04] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.777933] kauditd_printk_skb: 24 callbacks suppressed
	[Aug26 11:11] kauditd_printk_skb: 1 callbacks suppressed
	[Aug26 11:14] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
	[  +0.146773] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[  +0.200928] systemd-fstab-generator[3619]: Ignoring "noauto" option for root device
	[  +0.172960] systemd-fstab-generator[3631]: Ignoring "noauto" option for root device
	[  +0.287553] systemd-fstab-generator[3659]: Ignoring "noauto" option for root device
	[  +3.147898] systemd-fstab-generator[3763]: Ignoring "noauto" option for root device
	[  +6.512708] kauditd_printk_skb: 122 callbacks suppressed
	[Aug26 11:15] kauditd_printk_skb: 87 callbacks suppressed
	[ +27.136017] kauditd_printk_skb: 5 callbacks suppressed
	[ +20.002082] kauditd_printk_skb: 8 callbacks suppressed
	[Aug26 11:16] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b] <==
	{"level":"warn","ts":"2024-08-26T11:16:37.935885Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.209:2380/version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:37.936026Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:40.131553Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:40.134879Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:41.938597Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.209:2380/version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:41.938807Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:45.132499Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:45.135212Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:45.940822Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.209:2380/version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:45.941065Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:49.943065Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.209:2380/version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:49.943244Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee6a5229deeda489","error":"Get \"https://192.168.39.209:2380/version\": dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:50.132797Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-26T11:16:50.137838Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee6a5229deeda489","rtt":"0s","error":"dial tcp 192.168.39.209:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-26T11:16:52.066338Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:16:52.066457Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:16:52.072194Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:16:52.088347Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"ee6a5229deeda489","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-26T11:16:52.088460Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:16:52.098866Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"ee6a5229deeda489","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-26T11:16:52.098990Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:16:52.122539Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.209:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-26T11:16:53.847064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.645908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:439"}
	{"level":"info","ts":"2024-08-26T11:16:53.847290Z","caller":"traceutil/trace.go:171","msg":"trace[162693067] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2418; }","duration":"106.896263ms","start":"2024-08-26T11:16:53.740366Z","end":"2024-08-26T11:16:53.847262Z","steps":["trace[162693067] 'range keys from in-memory index tree'  (duration: 105.406517ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:16:56.131628Z","caller":"traceutil/trace.go:171","msg":"trace[1764644837] transaction","detail":"{read_only:false; response_revision:2427; number_of_response:1; }","duration":"120.017838ms","start":"2024-08-26T11:16:56.011586Z","end":"2024-08-26T11:16:56.131604Z","steps":["trace[1764644837] 'process raft request'  (duration: 119.86995ms)"],"step_count":1}
	
	
	==> etcd [9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5] <==
	2024/08/26 11:13:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/26 11:13:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-26T11:13:06.898911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:13:06.899101Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-26T11:13:06.899231Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2236e2deb63504cb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-26T11:13:06.899442Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899542Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899648Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899920Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899988Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.900002Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.900008Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900019Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900042Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900161Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900211Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900257Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900279Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.902599Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"warn","ts":"2024-08-26T11:13:06.902616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.98200491s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-26T11:13:06.902739Z","caller":"traceutil/trace.go:171","msg":"trace[855635748] range","detail":"{range_begin:; range_end:; }","duration":"8.982145437s","start":"2024-08-26T11:12:57.920585Z","end":"2024-08-26T11:13:06.902730Z","steps":["trace[855635748] 'agreement among raft nodes before linearized reading'  (duration: 8.982003038s)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:13:06.902859Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-08-26T11:13:06.902947Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-055395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	{"level":"error","ts":"2024-08-26T11:13:06.902848Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 11:17:38 up 14 min,  0 users,  load average: 0.42, 0.53, 0.30
	Linux ha-055395 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031] <==
	I0826 11:17:00.635226       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:17:10.636571       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:17:10.636711       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:17:10.636923       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:17:10.636957       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:17:10.637089       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:17:10.637119       1 main.go:299] handling current node
	I0826 11:17:10.637148       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:17:10.637164       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:17:20.638002       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:17:20.638058       1 main.go:299] handling current node
	I0826 11:17:20.638084       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:17:20.638091       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:17:20.638324       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:17:20.638357       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:17:20.638445       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:17:20.638472       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:17:30.634213       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:17:30.634263       1 main.go:299] handling current node
	I0826 11:17:30.634281       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:17:30.634290       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:17:30.634457       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:17:30.634486       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:17:30.634549       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:17:30.634554       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8] <==
	I0826 11:12:31.525192       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:12:41.524352       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:12:41.524497       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:12:41.524736       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:12:41.525338       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:12:41.525488       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:12:41.525535       1 main.go:299] handling current node
	I0826 11:12:41.525562       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:12:41.525580       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:12:51.529853       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:12:51.529913       1 main.go:299] handling current node
	I0826 11:12:51.529937       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:12:51.529944       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:12:51.530251       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:12:51.530292       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:12:51.530420       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:12:51.530442       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:13:01.524395       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:13:01.524517       1 main.go:299] handling current node
	I0826 11:13:01.524551       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:13:01.524613       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:13:01.524876       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:13:01.525001       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:13:01.525218       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:13:01.525269       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d] <==
	I0826 11:14:49.999068       1 options.go:228] external host was not specified, using 192.168.39.150
	I0826 11:14:50.001120       1 server.go:142] Version: v1.31.0
	I0826 11:14:50.001169       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:14:50.471885       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0826 11:14:50.487844       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:14:50.493690       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0826 11:14:50.493814       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0826 11:14:50.494136       1 instance.go:232] Using reconciler: lease
	W0826 11:15:10.471352       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0826 11:15:10.471353       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0826 11:15:10.495265       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0826 11:15:10.495407       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b] <==
	I0826 11:15:34.598342       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0826 11:15:34.599418       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0826 11:15:34.684318       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 11:15:34.684360       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 11:15:34.684941       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 11:15:34.685375       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0826 11:15:34.688465       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 11:15:34.688587       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:15:34.689120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 11:15:34.689202       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:15:34.689229       1 policy_source.go:224] refreshing policies
	I0826 11:15:34.712645       1 shared_informer.go:320] Caches are synced for configmaps
	I0826 11:15:34.723478       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0826 11:15:34.731022       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 11:15:34.731619       1 aggregator.go:171] initial CRD sync complete...
	I0826 11:15:34.731713       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 11:15:34.731778       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 11:15:34.731811       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:15:34.741937       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0826 11:15:34.770656       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.55]
	I0826 11:15:34.772664       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 11:15:34.785266       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0826 11:15:34.789149       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0826 11:15:35.605372       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0826 11:15:36.109940       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.55]
	
	
	==> kube-controller-manager [4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b] <==
	I0826 11:16:09.379073       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:16:09.379604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:09.386953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:16:09.422701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:16:09.424161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:09.521105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.941143ms"
	I0826 11:16:09.521280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.14µs"
	I0826 11:16:13.148932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:14.648198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:14.731289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m02"
	I0826 11:16:23.230675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:16:24.730339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:16:42.491434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:42.508246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:43.054718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:16:43.436383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="109.167µs"
	I0826 11:16:58.692253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:16:58.794374       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:17:00.995722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.874327ms"
	I0826 11:17:00.995891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.976µs"
	I0826 11:17:13.406403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m03"
	I0826 11:17:29.440731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-055395-m04"
	I0826 11:17:29.440856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:17:29.461177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:17:29.627077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	
	
	==> kube-controller-manager [8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00] <==
	I0826 11:15:22.404365       1 serving.go:386] Generated self-signed cert in-memory
	I0826 11:15:22.698175       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0826 11:15:22.698258       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:15:22.699693       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:15:22.699853       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0826 11:15:22.699857       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0826 11:15:22.699989       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0826 11:15:34.736402       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:14:51.837225       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:14:54.911378       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:14:57.982633       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:15:04.127506       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:15:16.414091       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0826 11:15:33.785700       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0826 11:15:33.786643       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:15:33.860650       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:15:33.860712       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:15:33.860880       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:15:33.875393       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:15:33.876061       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:15:33.876446       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:15:33.883254       1 config.go:197] "Starting service config controller"
	I0826 11:15:33.883482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:15:33.883642       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:15:33.883841       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:15:33.888649       1 config.go:326] "Starting node config controller"
	I0826 11:15:33.888818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:15:33.984595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:15:33.984692       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:15:33.990830       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235] <==
	E0826 11:11:59.808594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:09.022212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:09.022340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:09.022454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:09.022484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:12.094560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:12.094739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:18.240312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:18.240364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:21.309241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:21.309300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:27.453984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:27.454059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:33.598109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:33.598249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:36.669512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:36.669591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:42.814640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:42.815109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2] <==
	W0826 11:15:28.849520       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:28.849636       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:28.909835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:28.910452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.489160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.489280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.785021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.150:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.785086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.150:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.816022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.816116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:30.959850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.150:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:30.959904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.150:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.075516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.150:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.075641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.150:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.122250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.150:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.122336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.150:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.158329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.158455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.463032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.463076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:34.607595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:15:34.607718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:15:34.607813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 11:15:34.607846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 11:15:52.607734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3] <==
	E0826 11:07:03.708838       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.708887       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2ef2b044-3278-43d7-8164-a8b51d7f9424(kube-system/kube-proxy-kkwxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kkwxm"
	E0826 11:07:03.708901       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-kkwxm"
	I0826 11:07:03.708919       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.709603       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:07:03.711019       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45edff34-de36-493a-9dba-b74e8a326787(kube-system/kindnet-ww4xl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ww4xl"
	E0826 11:07:03.711136       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-ww4xl"
	I0826 11:07:03.711360       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:12:57.561998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0826 11:12:57.601152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0826 11:12:57.615901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0826 11:12:59.013332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0826 11:12:59.065274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0826 11:13:00.486614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0826 11:13:00.895670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0826 11:13:01.774544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0826 11:13:02.289505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0826 11:13:03.749963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0826 11:13:04.910651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0826 11:13:05.980267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0826 11:13:06.778269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	I0826 11:13:06.818673       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0826 11:13:06.819011       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0826 11:13:06.819229       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0826 11:13:06.819577       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 26 11:16:36 ha-055395 kubelet[1329]: I0826 11:16:36.256917    1329 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-055395" podUID="72a93d75-67e0-4605-81c3-f1ed830fd5eb"
	Aug 26 11:16:36 ha-055395 kubelet[1329]: I0826 11:16:36.286906    1329 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-055395"
	Aug 26 11:16:41 ha-055395 kubelet[1329]: I0826 11:16:41.257811    1329 scope.go:117] "RemoveContainer" containerID="9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525"
	Aug 26 11:16:41 ha-055395 kubelet[1329]: E0826 11:16:41.258035    1329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bf3fea9-2562-4769-944b-72472da24419)\"" pod="kube-system/storage-provisioner" podUID="5bf3fea9-2562-4769-944b-72472da24419"
	Aug 26 11:16:41 ha-055395 kubelet[1329]: I0826 11:16:41.276597    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-055395" podStartSLOduration=5.276562692 podStartE2EDuration="5.276562692s" podCreationTimestamp="2024-08-26 11:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-26 11:16:41.276190795 +0000 UTC m=+770.210927813" watchObservedRunningTime="2024-08-26 11:16:41.276562692 +0000 UTC m=+770.211299718"
	Aug 26 11:16:41 ha-055395 kubelet[1329]: E0826 11:16:41.475555    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671001475015137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:16:41 ha-055395 kubelet[1329]: E0826 11:16:41.475594    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671001475015137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:16:51 ha-055395 kubelet[1329]: E0826 11:16:51.275825    1329 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:16:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:16:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:16:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:16:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:16:51 ha-055395 kubelet[1329]: E0826 11:16:51.482079    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671011478204981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:16:51 ha-055395 kubelet[1329]: E0826 11:16:51.482133    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671011478204981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:16:52 ha-055395 kubelet[1329]: I0826 11:16:52.256713    1329 scope.go:117] "RemoveContainer" containerID="9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525"
	Aug 26 11:16:52 ha-055395 kubelet[1329]: E0826 11:16:52.257218    1329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bf3fea9-2562-4769-944b-72472da24419)\"" pod="kube-system/storage-provisioner" podUID="5bf3fea9-2562-4769-944b-72472da24419"
	Aug 26 11:17:01 ha-055395 kubelet[1329]: E0826 11:17:01.487715    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671021485916173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:01 ha-055395 kubelet[1329]: E0826 11:17:01.488083    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671021485916173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:03 ha-055395 kubelet[1329]: I0826 11:17:03.256353    1329 scope.go:117] "RemoveContainer" containerID="9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525"
	Aug 26 11:17:11 ha-055395 kubelet[1329]: E0826 11:17:11.492651    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671031492271382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:11 ha-055395 kubelet[1329]: E0826 11:17:11.492710    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671031492271382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:21 ha-055395 kubelet[1329]: E0826 11:17:21.495103    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671041494272345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:21 ha-055395 kubelet[1329]: E0826 11:17:21.495590    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671041494272345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:31 ha-055395 kubelet[1329]: E0826 11:17:31.499646    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671051498421952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:17:31 ha-055395 kubelet[1329]: E0826 11:17:31.500119    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671051498421952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 11:17:36.921002  124631 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-055395 -n ha-055395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-055395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (395.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 stop -v=7 --alsologtostderr
E0826 11:19:34.326882  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 stop -v=7 --alsologtostderr: exit status 82 (2m0.512883782s)

                                                
                                                
-- stdout --
	* Stopping node "ha-055395-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:17:56.088213  125443 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:17:56.089589  125443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:17:56.089611  125443 out.go:358] Setting ErrFile to fd 2...
	I0826 11:17:56.089618  125443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:17:56.090118  125443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:17:56.090459  125443 out.go:352] Setting JSON to false
	I0826 11:17:56.090591  125443 mustload.go:65] Loading cluster: ha-055395
	I0826 11:17:56.091028  125443 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:17:56.091122  125443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:17:56.091296  125443 mustload.go:65] Loading cluster: ha-055395
	I0826 11:17:56.091450  125443 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:17:56.091497  125443 stop.go:39] StopHost: ha-055395-m04
	I0826 11:17:56.091932  125443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:17:56.091979  125443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:17:56.111537  125443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0826 11:17:56.112208  125443 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:17:56.112841  125443 main.go:141] libmachine: Using API Version  1
	I0826 11:17:56.112871  125443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:17:56.113284  125443 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:17:56.116155  125443 out.go:177] * Stopping node "ha-055395-m04"  ...
	I0826 11:17:56.117526  125443 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 11:17:56.117587  125443 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:17:56.117941  125443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 11:17:56.117973  125443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:17:56.123417  125443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:17:56.124018  125443 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:17:22 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:17:56.124042  125443 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:17:56.124376  125443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:17:56.124650  125443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:17:56.124860  125443 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:17:56.125068  125443 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	I0826 11:17:56.216980  125443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 11:17:56.271235  125443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 11:17:56.325938  125443 main.go:141] libmachine: Stopping "ha-055395-m04"...
	I0826 11:17:56.325967  125443 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:17:56.328054  125443 main.go:141] libmachine: (ha-055395-m04) Calling .Stop
	I0826 11:17:56.332279  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 0/120
	I0826 11:17:57.333938  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 1/120
	I0826 11:17:58.335375  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 2/120
	I0826 11:17:59.337036  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 3/120
	I0826 11:18:00.339222  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 4/120
	I0826 11:18:01.341547  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 5/120
	I0826 11:18:02.343008  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 6/120
	I0826 11:18:03.344512  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 7/120
	I0826 11:18:04.346039  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 8/120
	I0826 11:18:05.347761  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 9/120
	I0826 11:18:06.349342  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 10/120
	I0826 11:18:07.351082  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 11/120
	I0826 11:18:08.353492  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 12/120
	I0826 11:18:09.355333  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 13/120
	I0826 11:18:10.357605  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 14/120
	I0826 11:18:11.360187  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 15/120
	I0826 11:18:12.362224  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 16/120
	I0826 11:18:13.363881  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 17/120
	I0826 11:18:14.365411  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 18/120
	I0826 11:18:15.367088  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 19/120
	I0826 11:18:16.369516  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 20/120
	I0826 11:18:17.371125  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 21/120
	I0826 11:18:18.373473  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 22/120
	I0826 11:18:19.375057  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 23/120
	I0826 11:18:20.377755  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 24/120
	I0826 11:18:21.380273  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 25/120
	I0826 11:18:22.382434  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 26/120
	I0826 11:18:23.384252  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 27/120
	I0826 11:18:24.385833  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 28/120
	I0826 11:18:25.387598  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 29/120
	I0826 11:18:26.390183  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 30/120
	I0826 11:18:27.391638  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 31/120
	I0826 11:18:28.393139  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 32/120
	I0826 11:18:29.394776  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 33/120
	I0826 11:18:30.396445  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 34/120
	I0826 11:18:31.398903  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 35/120
	I0826 11:18:32.400528  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 36/120
	I0826 11:18:33.402089  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 37/120
	I0826 11:18:34.403729  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 38/120
	I0826 11:18:35.405666  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 39/120
	I0826 11:18:36.408307  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 40/120
	I0826 11:18:37.409553  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 41/120
	I0826 11:18:38.410911  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 42/120
	I0826 11:18:39.412204  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 43/120
	I0826 11:18:40.413901  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 44/120
	I0826 11:18:41.416393  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 45/120
	I0826 11:18:42.418144  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 46/120
	I0826 11:18:43.419635  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 47/120
	I0826 11:18:44.421186  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 48/120
	I0826 11:18:45.422643  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 49/120
	I0826 11:18:46.424960  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 50/120
	I0826 11:18:47.426335  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 51/120
	I0826 11:18:48.428428  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 52/120
	I0826 11:18:49.429900  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 53/120
	I0826 11:18:50.431805  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 54/120
	I0826 11:18:51.433808  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 55/120
	I0826 11:18:52.435288  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 56/120
	I0826 11:18:53.436851  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 57/120
	I0826 11:18:54.438124  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 58/120
	I0826 11:18:55.439633  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 59/120
	I0826 11:18:56.441838  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 60/120
	I0826 11:18:57.443205  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 61/120
	I0826 11:18:58.444784  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 62/120
	I0826 11:18:59.446245  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 63/120
	I0826 11:19:00.447983  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 64/120
	I0826 11:19:01.449733  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 65/120
	I0826 11:19:02.451598  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 66/120
	I0826 11:19:03.453557  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 67/120
	I0826 11:19:04.454975  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 68/120
	I0826 11:19:05.456412  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 69/120
	I0826 11:19:06.458229  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 70/120
	I0826 11:19:07.459964  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 71/120
	I0826 11:19:08.461663  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 72/120
	I0826 11:19:09.463097  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 73/120
	I0826 11:19:10.465564  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 74/120
	I0826 11:19:11.467822  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 75/120
	I0826 11:19:12.469427  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 76/120
	I0826 11:19:13.471241  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 77/120
	I0826 11:19:14.472670  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 78/120
	I0826 11:19:15.474114  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 79/120
	I0826 11:19:16.476595  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 80/120
	I0826 11:19:17.478214  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 81/120
	I0826 11:19:18.480136  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 82/120
	I0826 11:19:19.482616  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 83/120
	I0826 11:19:20.484306  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 84/120
	I0826 11:19:21.486609  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 85/120
	I0826 11:19:22.488254  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 86/120
	I0826 11:19:23.489535  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 87/120
	I0826 11:19:24.491189  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 88/120
	I0826 11:19:25.492586  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 89/120
	I0826 11:19:26.495297  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 90/120
	I0826 11:19:27.496902  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 91/120
	I0826 11:19:28.498419  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 92/120
	I0826 11:19:29.500604  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 93/120
	I0826 11:19:30.501833  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 94/120
	I0826 11:19:31.503791  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 95/120
	I0826 11:19:32.505309  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 96/120
	I0826 11:19:33.506682  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 97/120
	I0826 11:19:34.508039  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 98/120
	I0826 11:19:35.509404  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 99/120
	I0826 11:19:36.511963  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 100/120
	I0826 11:19:37.513300  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 101/120
	I0826 11:19:38.514591  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 102/120
	I0826 11:19:39.515919  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 103/120
	I0826 11:19:40.517370  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 104/120
	I0826 11:19:41.519563  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 105/120
	I0826 11:19:42.521023  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 106/120
	I0826 11:19:43.522391  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 107/120
	I0826 11:19:44.523780  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 108/120
	I0826 11:19:45.525770  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 109/120
	I0826 11:19:46.527603  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 110/120
	I0826 11:19:47.529497  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 111/120
	I0826 11:19:48.531132  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 112/120
	I0826 11:19:49.532904  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 113/120
	I0826 11:19:50.534385  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 114/120
	I0826 11:19:51.536435  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 115/120
	I0826 11:19:52.537787  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 116/120
	I0826 11:19:53.539388  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 117/120
	I0826 11:19:54.541612  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 118/120
	I0826 11:19:55.543537  125443 main.go:141] libmachine: (ha-055395-m04) Waiting for machine to stop 119/120
	I0826 11:19:56.544108  125443 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 11:19:56.544177  125443 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0826 11:19:56.546282  125443 out.go:201] 
	W0826 11:19:56.547854  125443 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0826 11:19:56.547878  125443 out.go:270] * 
	* 
	W0826 11:19:56.550299  125443 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 11:19:56.551769  125443 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-055395 stop -v=7 --alsologtostderr": exit status 82
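The GUEST_STOP_TIMEOUT above is the tail end of a bounded polling loop: the driver requests a shutdown and then re-checks the VM state roughly once per second for up to 120 attempts (the "Waiting for machine to stop N/120" lines), giving up with exit status 82 when the machine still reports "Running". The following is a minimal Go sketch of that pattern only; the Machine interface, stopWithTimeout helper, and stuckVM type are hypothetical stand-ins for illustration, not minikube's actual libmachine API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Machine is a hypothetical stand-in for a libmachine-style driver handle.
	type Machine interface {
		Stop() error            // request a graceful shutdown
		State() (string, error) // e.g. "Running", "Stopped"
	}

	// stopWithTimeout issues a stop request, then polls the machine state once
	// per second for at most `attempts` tries, mirroring the
	// "Waiting for machine to stop N/120" loop captured above.
	func stopWithTimeout(m Machine, attempts int) error {
		if err := m.Stop(); err != nil {
			return fmt.Errorf("stop request failed: %w", err)
		}
		for i := 0; i < attempts; i++ {
			state, err := m.State()
			if err == nil && state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stuckVM always reports "Running", reproducing the failure mode above.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		// A tiny attempt count so the demo finishes quickly.
		if err := stopWithTimeout(stuckVM{}, 3); err != nil {
			fmt.Println("stop host returned error:", err)
		}
	}

In the test run above the loop exhausts all 120 attempts because the m04 VM never leaves "Running", which is what turns `minikube stop` into exit status 82 and fails the StopCluster step.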
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr: exit status 3 (18.982716709s)

                                                
                                                
-- stdout --
	ha-055395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055395-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:19:56.602203  125890 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:19:56.602513  125890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:19:56.602525  125890 out.go:358] Setting ErrFile to fd 2...
	I0826 11:19:56.602529  125890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:19:56.602742  125890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:19:56.602989  125890 out.go:352] Setting JSON to false
	I0826 11:19:56.603022  125890 mustload.go:65] Loading cluster: ha-055395
	I0826 11:19:56.603137  125890 notify.go:220] Checking for updates...
	I0826 11:19:56.603496  125890 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:19:56.603518  125890 status.go:255] checking status of ha-055395 ...
	I0826 11:19:56.604002  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.604075  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.627508  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0826 11:19:56.628180  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.628796  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.628818  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.629224  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.629465  125890 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:19:56.631611  125890 status.go:330] ha-055395 host status = "Running" (err=<nil>)
	I0826 11:19:56.631632  125890 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:19:56.631949  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.632002  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.648786  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0826 11:19:56.649299  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.649895  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.649924  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.650237  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.650424  125890 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:19:56.653676  125890 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:19:56.654146  125890 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:19:56.654181  125890 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:19:56.654333  125890 host.go:66] Checking if "ha-055395" exists ...
	I0826 11:19:56.654994  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.655050  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.672623  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0826 11:19:56.673050  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.673573  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.673648  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.674040  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.674296  125890 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:19:56.674559  125890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:19:56.674613  125890 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:19:56.677981  125890 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:19:56.678455  125890 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:19:56.678491  125890 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:19:56.678615  125890 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:19:56.678802  125890 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:19:56.678978  125890 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:19:56.679130  125890 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:19:56.764666  125890 ssh_runner.go:195] Run: systemctl --version
	I0826 11:19:56.770737  125890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:19:56.788450  125890 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:19:56.788513  125890 api_server.go:166] Checking apiserver status ...
	I0826 11:19:56.788559  125890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:19:56.810700  125890 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4987/cgroup
	W0826 11:19:56.821601  125890 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4987/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:19:56.821698  125890 ssh_runner.go:195] Run: ls
	I0826 11:19:56.828039  125890 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:19:56.835304  125890 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:19:56.835337  125890 status.go:422] ha-055395 apiserver status = Running (err=<nil>)
	I0826 11:19:56.835413  125890 status.go:257] ha-055395 status: &{Name:ha-055395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:19:56.835468  125890 status.go:255] checking status of ha-055395-m02 ...
	I0826 11:19:56.835801  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.835841  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.853231  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0826 11:19:56.853710  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.854150  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.854176  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.854531  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.854716  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetState
	I0826 11:19:56.856361  125890 status.go:330] ha-055395-m02 host status = "Running" (err=<nil>)
	I0826 11:19:56.856377  125890 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:19:56.856648  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.856689  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.872539  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0826 11:19:56.873067  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.873821  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.873854  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.874189  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.874424  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetIP
	I0826 11:19:56.877717  125890 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:19:56.878358  125890 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:14:53 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:19:56.878389  125890 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:19:56.878533  125890 host.go:66] Checking if "ha-055395-m02" exists ...
	I0826 11:19:56.878899  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:56.878950  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:56.894780  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0826 11:19:56.895361  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:56.895977  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:56.896009  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:56.896309  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:56.896508  125890 main.go:141] libmachine: (ha-055395-m02) Calling .DriverName
	I0826 11:19:56.896731  125890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:19:56.896760  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHHostname
	I0826 11:19:56.900309  125890 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:19:56.900784  125890 main.go:141] libmachine: (ha-055395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:d6:56", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:14:53 +0000 UTC Type:0 Mac:52:54:00:5f:d6:56 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-055395-m02 Clientid:01:52:54:00:5f:d6:56}
	I0826 11:19:56.900813  125890 main.go:141] libmachine: (ha-055395-m02) DBG | domain ha-055395-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:5f:d6:56 in network mk-ha-055395
	I0826 11:19:56.901085  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHPort
	I0826 11:19:56.901273  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHKeyPath
	I0826 11:19:56.901460  125890 main.go:141] libmachine: (ha-055395-m02) Calling .GetSSHUsername
	I0826 11:19:56.901665  125890 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m02/id_rsa Username:docker}
	I0826 11:19:56.997305  125890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:19:57.018329  125890 kubeconfig.go:125] found "ha-055395" server: "https://192.168.39.254:8443"
	I0826 11:19:57.018359  125890 api_server.go:166] Checking apiserver status ...
	I0826 11:19:57.018393  125890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:19:57.034568  125890 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1418/cgroup
	W0826 11:19:57.044884  125890 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1418/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:19:57.044952  125890 ssh_runner.go:195] Run: ls
	I0826 11:19:57.049275  125890 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0826 11:19:57.053406  125890 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0826 11:19:57.053441  125890 status.go:422] ha-055395-m02 apiserver status = Running (err=<nil>)
	I0826 11:19:57.053453  125890 status.go:257] ha-055395-m02 status: &{Name:ha-055395-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:19:57.053471  125890 status.go:255] checking status of ha-055395-m04 ...
	I0826 11:19:57.053868  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:57.053915  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:57.069802  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0826 11:19:57.070289  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:57.070874  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:57.070906  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:57.071267  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:57.071518  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetState
	I0826 11:19:57.073427  125890 status.go:330] ha-055395-m04 host status = "Running" (err=<nil>)
	I0826 11:19:57.073447  125890 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:19:57.073753  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:57.073801  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:57.091508  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0826 11:19:57.092026  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:57.092530  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:57.092554  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:57.092889  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:57.093060  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetIP
	I0826 11:19:57.095687  125890 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:19:57.096112  125890 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:17:22 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:19:57.096141  125890 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:19:57.096342  125890 host.go:66] Checking if "ha-055395-m04" exists ...
	I0826 11:19:57.096651  125890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:19:57.096692  125890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:19:57.112305  125890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0826 11:19:57.112797  125890 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:19:57.113385  125890 main.go:141] libmachine: Using API Version  1
	I0826 11:19:57.113420  125890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:19:57.113793  125890 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:19:57.114013  125890 main.go:141] libmachine: (ha-055395-m04) Calling .DriverName
	I0826 11:19:57.114242  125890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:19:57.114264  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHHostname
	I0826 11:19:57.117273  125890 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:19:57.117716  125890 main.go:141] libmachine: (ha-055395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:1f:f6", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:17:22 +0000 UTC Type:0 Mac:52:54:00:72:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-055395-m04 Clientid:01:52:54:00:72:1f:f6}
	I0826 11:19:57.117749  125890 main.go:141] libmachine: (ha-055395-m04) DBG | domain ha-055395-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:72:1f:f6 in network mk-ha-055395
	I0826 11:19:57.117872  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHPort
	I0826 11:19:57.118068  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHKeyPath
	I0826 11:19:57.118218  125890 main.go:141] libmachine: (ha-055395-m04) Calling .GetSSHUsername
	I0826 11:19:57.118366  125890 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395-m04/id_rsa Username:docker}
	W0826 11:20:15.535075  125890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.185:22: connect: no route to host
	W0826 11:20:15.535174  125890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0826 11:20:15.535191  125890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	I0826 11:20:15.535200  125890 status.go:257] ha-055395-m04 status: &{Name:ha-055395-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0826 11:20:15.535226  125890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr" : exit status 3
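The stderr above shows how each row of the status table is derived: the driver reports the host state, an SSH session runs `df -h /var` and `sudo systemctl is-active --quiet service kubelet`, and control-plane nodes are additionally probed at https://192.168.39.254:8443/healthz. When the SSH dial fails ("no route to host", as for ha-055395-m04, left in an unknown state by the failed stop), the node is reported as Host:Error / Kubelet:Nonexistent and the command exits non-zero. Below is a minimal Go sketch of that decision flow under those assumptions; the names runSSH, nodeStatus, and NodeStatus are illustrative, not minikube's real functions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// NodeStatus mirrors the per-node fields printed by "minikube status" above.
	type NodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	// runSSH stands in for an ssh_runner-style helper: it executes a command on
	// the node and returns an error when the node is unreachable.
	type runSSH func(cmd string) error

	// nodeStatus sketches the checks visible in the stderr log: host state from
	// the driver, kubelet via systemctl over SSH, and (for control-plane nodes)
	// the apiserver via the shared /healthz endpoint.
	func nodeStatus(name, hostState string, ssh runSSH, healthz string, controlPlane bool) NodeStatus {
		st := NodeStatus{Name: name, Host: hostState, Kubelet: "Nonexistent", APIServer: "Irrelevant"}
		if hostState != "Running" {
			return st
		}
		// An unreachable node becomes Host:Error / Kubelet:Nonexistent,
		// exactly like ha-055395-m04 above.
		if err := ssh("df -h /var"); err != nil {
			st.Host = "Error"
			return st
		}
		if ssh("sudo systemctl is-active --quiet service kubelet") == nil {
			st.Kubelet = "Running"
		} else {
			st.Kubelet = "Stopped"
		}
		if controlPlane {
			st.APIServer = "Stopped"
			// The real check first locates the apiserver process (pgrep) and
			// probes healthz with the cluster CA; verification is skipped here
			// only to keep the sketch short.
			client := &http.Client{
				Timeout:   5 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			if resp, err := client.Get(healthz); err == nil {
				if resp.StatusCode == http.StatusOK {
					st.APIServer = "Running"
				}
				resp.Body.Close()
			}
		}
		return st
	}

	func main() {
		unreachable := func(string) error {
			return fmt.Errorf("dial tcp 192.168.39.185:22: connect: no route to host")
		}
		fmt.Printf("%+v\n", nodeStatus("ha-055395-m04", "Running", unreachable, "", false))
	}

Run against an unreachable worker, the sketch prints Host:Error / Kubelet:Nonexistent, matching the m04 row in the stdout above and the exit status 3 that fails this step.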
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-055395 -n ha-055395
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-055395 logs -n 25: (1.707890962s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m04 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp testdata/cp-test.txt                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395:/home/docker/cp-test_ha-055395-m04_ha-055395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395 sudo cat                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m02:/home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m02 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m03:/home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n                                                                 | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | ha-055395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-055395 ssh -n ha-055395-m03 sudo cat                                          | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC | 26 Aug 24 11:07 UTC |
	|         | /home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-055395 node stop m02 -v=7                                                     | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-055395 node start m02 -v=7                                                    | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-055395 -v=7                                                           | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-055395 -v=7                                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-055395 --wait=true -v=7                                                    | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:13 UTC | 26 Aug 24 11:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-055395                                                                | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:17 UTC |                     |
	| node    | ha-055395 node delete m03 -v=7                                                   | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:17 UTC | 26 Aug 24 11:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-055395 stop -v=7                                                              | ha-055395 | jenkins | v1.33.1 | 26 Aug 24 11:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:13:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:13:05.972526  123193 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:13:05.972664  123193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:13:05.972674  123193 out.go:358] Setting ErrFile to fd 2...
	I0826 11:13:05.972678  123193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:13:05.972905  123193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:13:05.973505  123193 out.go:352] Setting JSON to false
	I0826 11:13:05.974451  123193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3327,"bootTime":1724667459,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:13:05.974518  123193 start.go:139] virtualization: kvm guest
	I0826 11:13:05.980809  123193 out.go:177] * [ha-055395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:13:05.986500  123193 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:13:05.986504  123193 notify.go:220] Checking for updates...
	I0826 11:13:05.989822  123193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:13:05.991398  123193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:13:05.992722  123193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:13:05.994201  123193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:13:05.995819  123193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:13:05.997945  123193 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:13:05.998078  123193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:13:05.998723  123193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:13:05.998819  123193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:13:06.015878  123193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0826 11:13:06.016561  123193 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:13:06.017181  123193 main.go:141] libmachine: Using API Version  1
	I0826 11:13:06.017208  123193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:13:06.017647  123193 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:13:06.017850  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.059222  123193 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:13:06.060846  123193 start.go:297] selected driver: kvm2
	I0826 11:13:06.060871  123193 start.go:901] validating driver "kvm2" against &{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:13:06.061034  123193 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:13:06.061389  123193 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:13:06.061486  123193 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:13:06.077452  123193 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:13:06.078262  123193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:13:06.078324  123193 cni.go:84] Creating CNI manager for ""
	I0826 11:13:06.078336  123193 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 11:13:06.078394  123193 start.go:340] cluster config:
	{Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:13:06.078577  123193 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:13:06.080853  123193 out.go:177] * Starting "ha-055395" primary control-plane node in "ha-055395" cluster
	I0826 11:13:06.082350  123193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:13:06.082396  123193 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:13:06.082409  123193 cache.go:56] Caching tarball of preloaded images
	I0826 11:13:06.082515  123193 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:13:06.082526  123193 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:13:06.082658  123193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/config.json ...
	I0826 11:13:06.082904  123193 start.go:360] acquireMachinesLock for ha-055395: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:13:06.082956  123193 start.go:364] duration metric: took 30.732µs to acquireMachinesLock for "ha-055395"
	I0826 11:13:06.082977  123193 start.go:96] Skipping create...Using existing machine configuration
	I0826 11:13:06.082985  123193 fix.go:54] fixHost starting: 
	I0826 11:13:06.083232  123193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:13:06.083271  123193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:13:06.098695  123193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0826 11:13:06.099178  123193 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:13:06.099767  123193 main.go:141] libmachine: Using API Version  1
	I0826 11:13:06.099815  123193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:13:06.100245  123193 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:13:06.100483  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.100675  123193 main.go:141] libmachine: (ha-055395) Calling .GetState
	I0826 11:13:06.102613  123193 fix.go:112] recreateIfNeeded on ha-055395: state=Running err=<nil>
	W0826 11:13:06.102658  123193 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 11:13:06.104806  123193 out.go:177] * Updating the running kvm2 "ha-055395" VM ...
	I0826 11:13:06.106160  123193 machine.go:93] provisionDockerMachine start ...
	I0826 11:13:06.106192  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:13:06.106473  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.109432  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.109980  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.110009  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.110231  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.110457  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.110649  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.110792  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.111029  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.111281  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.111294  123193 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 11:13:06.224598  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:13:06.224636  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.224952  123193 buildroot.go:166] provisioning hostname "ha-055395"
	I0826 11:13:06.224982  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.225168  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.227866  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.228317  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.228351  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.228557  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.228791  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.228983  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.229119  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.229314  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.229485  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.229498  123193 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-055395 && echo "ha-055395" | sudo tee /etc/hostname
	I0826 11:13:06.362622  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-055395
	
	I0826 11:13:06.362660  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.365442  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.365874  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.365904  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.366107  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.366311  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.366482  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.366619  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.366793  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.366990  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.367007  123193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-055395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-055395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-055395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:13:06.475650  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:13:06.475683  123193 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:13:06.475706  123193 buildroot.go:174] setting up certificates
	I0826 11:13:06.475716  123193 provision.go:84] configureAuth start
	I0826 11:13:06.475729  123193 main.go:141] libmachine: (ha-055395) Calling .GetMachineName
	I0826 11:13:06.476021  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:13:06.478775  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.479208  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.479237  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.479454  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.481720  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.482145  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.482168  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.482329  123193 provision.go:143] copyHostCerts
	I0826 11:13:06.482362  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:13:06.482401  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:13:06.482420  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:13:06.482491  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:13:06.482565  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:13:06.482581  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:13:06.482587  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:13:06.482609  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:13:06.482652  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:13:06.482669  123193 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:13:06.482675  123193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:13:06.482698  123193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:13:06.482743  123193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.ha-055395 san=[127.0.0.1 192.168.39.150 ha-055395 localhost minikube]
	I0826 11:13:06.542046  123193 provision.go:177] copyRemoteCerts
	I0826 11:13:06.542106  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:13:06.542129  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.545265  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.545674  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.545706  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.545933  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.546130  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.546253  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.546432  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:13:06.629241  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:13:06.629313  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:13:06.656782  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:13:06.656874  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0826 11:13:06.683314  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:13:06.683387  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 11:13:06.709769  123193 provision.go:87] duration metric: took 234.035583ms to configureAuth
	I0826 11:13:06.709807  123193 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:13:06.710064  123193 config.go:182] Loaded profile config "ha-055395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:13:06.710139  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:13:06.712949  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.713376  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:13:06.713407  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:13:06.713614  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:13:06.713827  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.713977  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:13:06.714096  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:13:06.714240  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:13:06.714438  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:13:06.714463  123193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:14:37.499599  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:14:37.499635  123193 machine.go:96] duration metric: took 1m31.393448396s to provisionDockerMachine
	I0826 11:14:37.499648  123193 start.go:293] postStartSetup for "ha-055395" (driver="kvm2")
	I0826 11:14:37.499659  123193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:14:37.499676  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.500016  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:14:37.500052  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.503073  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.503484  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.503510  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.503697  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.503910  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.504095  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.504255  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.585617  123193 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:14:37.589707  123193 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:14:37.589728  123193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:14:37.589794  123193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:14:37.589883  123193 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:14:37.589898  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:14:37.590009  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:14:37.598989  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:14:37.622120  123193 start.go:296] duration metric: took 122.453834ms for postStartSetup
	I0826 11:14:37.622178  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.622494  123193 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0826 11:14:37.622521  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.625236  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.625681  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.625703  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.625888  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.626073  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.626248  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.626429  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	W0826 11:14:37.704862  123193 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0826 11:14:37.704892  123193 fix.go:56] duration metric: took 1m31.621907358s for fixHost
	I0826 11:14:37.704919  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.707720  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.708155  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.708183  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.708361  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.708634  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.708844  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.709011  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.709158  123193 main.go:141] libmachine: Using SSH client type: native
	I0826 11:14:37.709332  123193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0826 11:14:37.709346  123193 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:14:37.811879  123193 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724670877.778682570
	
	I0826 11:14:37.811905  123193 fix.go:216] guest clock: 1724670877.778682570
	I0826 11:14:37.811916  123193 fix.go:229] Guest: 2024-08-26 11:14:37.77868257 +0000 UTC Remote: 2024-08-26 11:14:37.704904399 +0000 UTC m=+91.769863937 (delta=73.778171ms)
	I0826 11:14:37.811944  123193 fix.go:200] guest clock delta is within tolerance: 73.778171ms
	I0826 11:14:37.811952  123193 start.go:83] releasing machines lock for "ha-055395", held for 1m31.728983246s
	I0826 11:14:37.811977  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.812275  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:14:37.814931  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.815336  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.815365  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.815609  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816156  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816374  123193 main.go:141] libmachine: (ha-055395) Calling .DriverName
	I0826 11:14:37.816485  123193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:14:37.816545  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.816615  123193 ssh_runner.go:195] Run: cat /version.json
	I0826 11:14:37.816644  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHHostname
	I0826 11:14:37.819278  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819622  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.819646  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819801  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.819832  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.820048  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.820191  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:37.820215  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.820217  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:37.820403  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHPort
	I0826 11:14:37.820467  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.820562  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHKeyPath
	I0826 11:14:37.820734  123193 main.go:141] libmachine: (ha-055395) Calling .GetSSHUsername
	I0826 11:14:37.820882  123193 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/ha-055395/id_rsa Username:docker}
	I0826 11:14:37.936654  123193 ssh_runner.go:195] Run: systemctl --version
	I0826 11:14:37.942875  123193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:14:38.100026  123193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:14:38.109666  123193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:14:38.109755  123193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:14:38.119060  123193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 11:14:38.119088  123193 start.go:495] detecting cgroup driver to use...
	I0826 11:14:38.119167  123193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:14:38.135424  123193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:14:38.149564  123193 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:14:38.149633  123193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:14:38.163506  123193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:14:38.177559  123193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:14:38.329098  123193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:14:38.484602  123193 docker.go:233] disabling docker service ...
	I0826 11:14:38.484704  123193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:14:38.503936  123193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:14:38.519093  123193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:14:38.690930  123193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:14:38.852272  123193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:14:38.867965  123193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:14:38.886491  123193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:14:38.886565  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.897487  123193 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:14:38.897554  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.908297  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.919147  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.929313  123193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:14:38.940137  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.950735  123193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.961394  123193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:14:38.972102  123193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:14:38.982223  123193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:14:38.992083  123193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:14:39.149744  123193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:14:41.775213  123193 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.625428001s)
	I0826 11:14:41.775252  123193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:14:41.775312  123193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:14:41.780065  123193 start.go:563] Will wait 60s for crictl version
	I0826 11:14:41.780139  123193 ssh_runner.go:195] Run: which crictl
	I0826 11:14:41.783817  123193 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:14:41.825379  123193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:14:41.825482  123193 ssh_runner.go:195] Run: crio --version
	I0826 11:14:41.854410  123193 ssh_runner.go:195] Run: crio --version
	I0826 11:14:41.886143  123193 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:14:41.887414  123193 main.go:141] libmachine: (ha-055395) Calling .GetIP
	I0826 11:14:41.890088  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:41.890460  123193 main.go:141] libmachine: (ha-055395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:82:8b", ip: ""} in network mk-ha-055395: {Iface:virbr1 ExpiryTime:2024-08-26 12:03:23 +0000 UTC Type:0 Mac:52:54:00:91:82:8b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-055395 Clientid:01:52:54:00:91:82:8b}
	I0826 11:14:41.890489  123193 main.go:141] libmachine: (ha-055395) DBG | domain ha-055395 has defined IP address 192.168.39.150 and MAC address 52:54:00:91:82:8b in network mk-ha-055395
	I0826 11:14:41.890699  123193 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:14:41.895155  123193 kubeadm.go:883] updating cluster {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:14:41.895329  123193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:14:41.895396  123193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:14:41.941359  123193 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:14:41.941392  123193 crio.go:433] Images already preloaded, skipping extraction
	I0826 11:14:41.941446  123193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:14:41.979061  123193 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:14:41.979093  123193 cache_images.go:84] Images are preloaded, skipping loading
	I0826 11:14:41.979107  123193 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.0 crio true true} ...
	I0826 11:14:41.979247  123193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-055395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:14:41.979335  123193 ssh_runner.go:195] Run: crio config
	I0826 11:14:42.035807  123193 cni.go:84] Creating CNI manager for ""
	I0826 11:14:42.035830  123193 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0826 11:14:42.035846  123193 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:14:42.035875  123193 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-055395 NodeName:ha-055395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:14:42.036062  123193 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-055395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
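The block above is the kubeadm configuration minikube renders for this control-plane node before copying it to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down in the log). As an illustrative sketch only, not part of the test flow: assuming SSH access to the ha-055395 guest and that the kubeadm binary staged under /var/lib/minikube/binaries/v1.31.0 is present, a rendered config like this could be sanity-checked with kubeadm's own validator (available in recent kubeadm releases):

    # Illustrative sketch, not taken from the test log.
    # Validate the rendered kubeadm config on the node without touching the cluster.
    minikube -p ha-055395 ssh -- \
      sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new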
	I0826 11:14:42.036085  123193 kube-vip.go:115] generating kube-vip config ...
	I0826 11:14:42.036139  123193 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0826 11:14:42.047736  123193 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0826 11:14:42.047869  123193 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
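The static pod manifest above pins the control-plane virtual IP 192.168.39.254 to eth0 and enables leader election over a Lease named plndr-cp-lock in kube-system. As an illustrative sketch only, assuming kubectl access to this ha-055395 cluster, the resulting VIP behaviour could be spot-checked like this:

    # Illustrative sketch, not taken from the test log.
    # Show which control-plane node currently holds the kube-vip leader lease.
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    # Confirm the virtual IP answers on the API server port (8443 per the config above).
    curl -k https://192.168.39.254:8443/healthz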
	I0826 11:14:42.047934  123193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:14:42.058174  123193 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:14:42.058301  123193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0826 11:14:42.068035  123193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0826 11:14:42.084250  123193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:14:42.100649  123193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0826 11:14:42.117899  123193 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0826 11:14:42.136179  123193 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0826 11:14:42.140225  123193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:14:42.283952  123193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:14:42.298814  123193 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395 for IP: 192.168.39.150
	I0826 11:14:42.298860  123193 certs.go:194] generating shared ca certs ...
	I0826 11:14:42.298884  123193 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.299081  123193 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:14:42.299124  123193 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:14:42.299137  123193 certs.go:256] generating profile certs ...
	I0826 11:14:42.299215  123193 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/client.key
	I0826 11:14:42.299246  123193 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715
	I0826 11:14:42.299283  123193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.55 192.168.39.209 192.168.39.254]
	I0826 11:14:42.471744  123193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 ...
	I0826 11:14:42.471780  123193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715: {Name:mk5497018f8a9b324095792b91b09a556316831e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.471994  123193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715 ...
	I0826 11:14:42.472013  123193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715: {Name:mkfba1a7079200f67ef713b5dcc30c2d61c3cfee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:14:42.472121  123193 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt.aed7e715 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt
	I0826 11:14:42.472265  123193 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key.aed7e715 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key
	I0826 11:14:42.472393  123193 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key
	I0826 11:14:42.472410  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:14:42.472424  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:14:42.472437  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:14:42.472449  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:14:42.472462  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:14:42.472474  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:14:42.472493  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:14:42.472505  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:14:42.472556  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:14:42.472593  123193 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:14:42.472602  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:14:42.472625  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:14:42.472646  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:14:42.472669  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:14:42.472705  123193 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:14:42.472730  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.472743  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.472758  123193 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.473309  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:14:42.498721  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:14:42.523079  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:14:42.546565  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:14:42.571407  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 11:14:42.595852  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:14:42.627257  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:14:42.653266  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/ha-055395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:14:42.679000  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:14:42.702897  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:14:42.727009  123193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:14:42.750956  123193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:14:42.766630  123193 ssh_runner.go:195] Run: openssl version
	I0826 11:14:42.772257  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:14:42.782541  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.786909  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.786973  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:14:42.792540  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:14:42.801894  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:14:42.812704  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.816924  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.816982  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:14:42.822387  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:14:42.831896  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:14:42.843195  123193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.847811  123193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.847881  123193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:14:42.854063  123193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:14:42.864186  123193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:14:42.868556  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 11:14:42.874071  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 11:14:42.879702  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 11:14:42.885099  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 11:14:42.890770  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 11:14:42.896155  123193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 11:14:42.901577  123193 kubeadm.go:392] StartCluster: {Name:ha-055395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-055395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.209 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:14:42.901719  123193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:14:42.901768  123193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:14:42.939101  123193 cri.go:89] found id: "f089083a0f12732599bd9007e4e46787aacb1806485e186d536fc6c3c5c88b4b"
	I0826 11:14:42.939128  123193 cri.go:89] found id: "a0d4d655ef65a314578371d034d4b81675c6c98786e609ba4282e0490966cae8"
	I0826 11:14:42.939132  123193 cri.go:89] found id: "ff3194e112f6dde16694850256b28235cc541fdd6c157c015335202884411715"
	I0826 11:14:42.939135  123193 cri.go:89] found id: "80c1b2c3d22b0215c4e6ce214890fd441801844dbfb230aabeb34c3ba312f453"
	I0826 11:14:42.939142  123193 cri.go:89] found id: "588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9"
	I0826 11:14:42.939146  123193 cri.go:89] found id: "9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e"
	I0826 11:14:42.939148  123193 cri.go:89] found id: "d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8"
	I0826 11:14:42.939151  123193 cri.go:89] found id: "4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235"
	I0826 11:14:42.939153  123193 cri.go:89] found id: "d4490a4c3fa0bf200887734220562b508030f2b53f3eada01c0a43d343fc6b7e"
	I0826 11:14:42.939159  123193 cri.go:89] found id: "9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3"
	I0826 11:14:42.939176  123193 cri.go:89] found id: "9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5"
	I0826 11:14:42.939182  123193 cri.go:89] found id: "bcd57c7d0ba05fdd7c595f5f90e02ebdda2a002696e90cc54b1d131bb91f5a5b"
	I0826 11:14:42.939185  123193 cri.go:89] found id: "37bbfc44887fa79c6faa7f9f59e8c86801ae075d37438a5ed42dc8d9e48c91c5"
	I0826 11:14:42.939201  123193 cri.go:89] found id: ""
	I0826 11:14:42.939257  123193 ssh_runner.go:195] Run: sudo runc list -f json
	
	
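The log above ends with minikube enumerating kube-system containers through crictl; the CRI-O service journal excerpt follows. As an illustrative sketch only, assuming SSH access to the ha-055395 node, the same runtime state could be inspected by hand:

    # Illustrative sketch, not taken from the test log.
    # List kube-system containers known to CRI-O (mirrors the crictl call in the log above).
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    # Tail the CRI-O service journal that the report excerpts below.
    sudo journalctl -u crio --no-pager -n 50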
	==> CRI-O <==
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.204622840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86ccbe92-b8bc-4ca9-9bae-8712ece989ce name=/runtime.v1.RuntimeService/Version
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.206347963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc4076ce-c151-43b5-b4da-32d950bcded3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.206941327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671216206909659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc4076ce-c151-43b5-b4da-32d950bcded3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.207636938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c730395-ae0b-49ab-b3d2-5f3924e2b92b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.207695923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c730395-ae0b-49ab-b3d2-5f3924e2b92b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.208155384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c730395-ae0b-49ab-b3d2-5f3924e2b92b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.264444359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fededf21-e9c5-47e0-af58-cf2b1dfa4741 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.264581299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fededf21-e9c5-47e0-af58-cf2b1dfa4741 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.266574345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd09a3d0-ad7a-4b00-87a8-487e7a85fe7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.267131380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671216267103866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd09a3d0-ad7a-4b00-87a8-487e7a85fe7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.267997928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27e2c4cd-1946-45f3-95b4-f568fe9834ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.268060187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27e2c4cd-1946-45f3-95b4-f568fe9834ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.268514531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27e2c4cd-1946-45f3-95b4-f568fe9834ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.310212039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b1ae723-8a2e-42ae-800d-bce5f8c4a805 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.310290003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b1ae723-8a2e-42ae-800d-bce5f8c4a805 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.312286666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7cbf1c2-0197-4c12-8d5d-92e38a929412 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.313014855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671216312722205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7cbf1c2-0197-4c12-8d5d-92e38a929412 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.313607927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29fe3fd2-eef3-4ab3-b357-6a6ae59794ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.313663928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29fe3fd2-eef3-4ab3-b357-6a6ae59794ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.314175032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29fe3fd2-eef3-4ab3-b357-6a6ae59794ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.336075805Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c54b31ed-4f09-4cd0-90e3-2fb5ccb8c2bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.336487620Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-xh6vw,Uid:94adba85-441f-40d9-bcf2-616b1bd587dc,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670922442104225,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:06:25.269972071Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-055395,Uid:117688a49b29a25319916957d22e0f02,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1724670904136444157,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{kubernetes.io/config.hash: 117688a49b29a25319916957d22e0f02,kubernetes.io/config.seen: 2024-08-26T11:14:42.103685994Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-l9bd4,Uid:087dd322-a382-40bc-b631-5744d64ee6b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888793470265,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-26T11:04:11.831733073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nxb7s,Uid:80b1f99e-a6b9-452f-9e21-b0df08325d56,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888763669500,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:04:11.820427821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-055395,Uid:a95c742b4bcd035455757fb1ce727265,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888740174784,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.150:8443,kubernetes.io/config.hash: a95c742b4bcd035455757fb1ce727265,kubernetes.io/config.seen: 2024-08-26T11:03:51.194875358Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-055395,Uid:522e8ac6d862b80ec2c639537cb631fc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888692833689,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,tier: control-pl
ane,},Annotations:map[string]string{kubernetes.io/config.hash: 522e8ac6d862b80ec2c639537cb631fc,kubernetes.io/config.seen: 2024-08-26T11:03:51.194877667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&PodSandboxMetadata{Name:kube-proxy-g45pb,Uid:0e2dc897-60b1-4d06-a4e4-30136a39a224,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888683230242,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:03:56.047841863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&PodSandboxMetadata{Name:kindnet-z2rh2,Uid:
f1df8e80-62b7-4a0a-b61a-135b907c101d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888677247984,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:03:56.029429490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&PodSandboxMetadata{Name:etcd-ha-055395,Uid:b978ad17f61285a0ca9fb6b555e7f874,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888676938913,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f
874,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: b978ad17f61285a0ca9fb6b555e7f874,kubernetes.io/config.seen: 2024-08-26T11:03:51.194871126Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5bf3fea9-2562-4769-944b-72472da24419,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888672973218,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addon
manager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-26T11:04:11.829164359Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-055395,Uid:1a459e34c23e31a6f2bf5b0dabb01c6a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724670888664479071,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a459e34c23e31a6f2bf5b0dabb01c6a,kubernetes.io/config.seen: 2024-08-26T11:03:51.194876657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-xh6vw,Uid:94adba85-441f-40d9-bcf2-616b1bd587dc,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670385885870971,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:06:25.269972071Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-l9bd4,Uid:087dd322-a382-40bc-b631-5744d64ee6b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670252149590284,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:04:11.831733073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nxb7s,Uid:80b1f99e-a6b9-452f-9e21-b0df08325d56,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670252128170238,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:04:11.820427821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&PodSandboxMetadata{Name:kube-proxy-g45pb,Uid:0e2dc897-60b1-4d06-a4e4-30136a39a224,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670236354056887,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:03:56.047841863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&PodSandboxMetadata{Name:kindnet-z2rh2,Uid:f1df8e80-62b7-4a0a-b61a-135b907c101d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670236336930048,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:03:56.029429490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-055395,Uid:522e8ac6d862b80ec2c639537cb631fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670224576684114,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 522e8ac6d862b80ec2c639537cb631fc,kubernetes.io/config.seen: 2024-08-26T11:03:44.066392096Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&PodSandboxMetadata{Name:etcd-ha-055395,Uid:b978ad17f61285a0ca9fb6b555e7f874,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724670224543403050,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: b978ad17
f61285a0ca9fb6b555e7f874,kubernetes.io/config.seen: 2024-08-26T11:03:44.066388051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c54b31ed-4f09-4cd0-90e3-2fb5ccb8c2bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.337745002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f572c060-65a4-40e6-97d4-c9dd2733c9f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.337841601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f572c060-65a4-40e6-97d4-c9dd2733c9f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:20:16 ha-055395 crio[3672]: time="2024-08-26 11:20:16.343380936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e247f0cb6ee28ce6b07d70c7a8c38830b7a09011c3e9849f693b1521d15d043,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724671023270144542,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724670955273014946,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724670932268629858,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b429292e658d338322eadb9d3c2ed5d26ae18097f78d4e7fe8c6e175d646525,PodSandboxId:5481856a84f015038bd80b64712deb0c30f92c087ad6edfdf191d5b1ede31d3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724670931283856157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf3fea9-2562-4769-944b-72472da24419,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928140b20a95ff0c3119d0653636e9b851e522ab99b70b2e483eafc1ec700be0,PodSandboxId:2b5049e4e5b5fbd9338f8f10756268555c790dd78d7db0f86c06ceb3a29dd4c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724670922603124539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00,PodSandboxId:7db9b4dfb41b55d17342111bb32dd47a444220d2ecbe5351afcc91ac43d038bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724670921735330432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a459e34c23e31a6f2bf5b0dabb01c6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d7c2151209bf1a63dbae6f97269ff3721a08ead39cd8000600f9b104db4aa5,PodSandboxId:a7119d535718657c402279fda8ceb579d99a982f2420024ad99e24bfbc9411fe,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724670904253364525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 117688a49b29a25319916957d22e0f02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe,PodSandboxId:575df53facd27dbeee36b055810ba60ba3949be22baa967e80731c5ef260ba4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889497200122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87,PodSandboxId:e70212335fe5766603a16fa81314bbcc16eb008065322f04cc45a538bc12eb98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724670889408031650,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031,PodSandboxId:f6eebe19a373fa5ebd5ae2557f05dd3f94626fd5ad034e5245016c3e589837e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724670889403115181,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40,PodSandboxId:926669bae3cfed57a466bc291acf8d84250f015bdb9b66afb3138bb28737d0c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724670889279681833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2,PodSandboxId:39d4eeb3baee11663b6670b8f3d31ff2a3154467a73c1daff6539d652d9288ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724670889132453741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d,PodSandboxId:a5997cca5dc2f5479186d6c23c5ef869fec329e79cfe5c8e0f1a5370324cb852,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724670889191908273,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a95c742b4bcd035455757fb1ce727265,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b,PodSandboxId:d2b709b0b3cf00ce35dfb0031ad6eee8a800afe9cf4d38a7ca5967d575153892,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724670889129638837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Anno
tations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f106e1bd830cc46024bce50463f31b85e297b1b20390e93f374a0f68beb057,PodSandboxId:a356e619a2186edc0ebe51e08fd4aaeb48b06a4e321ecc61b2396f00c1e268a4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724670388552334096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xh6vw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94adba85-441f-40d9-bcf2-616b1bd587dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9,PodSandboxId:73e7528d83ce5bd1c17839881908fbf1f080511f7b67d594c01ea7a9fb81ffde,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252441050731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nxb7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80b1f99e-a6b9-452f-9e21-b0df08325d56,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e,PodSandboxId:3593c2f74b608d7e49066e1273d5dcaa7d9cb304573c7ed09b8d26993daffd91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724670252404278458,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-l9bd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087dd322-a382-40bc-b631-5744d64ee6b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8,PodSandboxId:3f092331272f78a830e876e2b85540c027e1750c1ebaca756323878bb696f52e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724670240453596212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z2rh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1df8e80-62b7-4a0a-b61a-135b907c101d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235,PodSandboxId:dd6c20478efce0faca3555fc7f945465f86fadf4614a66e2ef2040621fbea877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724670236587424536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g45pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e2dc897-60b1-4d06-a4e4-30136a39a224,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3,PodSandboxId:d03f2374626725a15f97407706ca6df6f8ac4f9b8ceb87304d29b11b757765a7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724670224882013795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522e8ac6d862b80ec2c639537cb631fc,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5,PodSandboxId:40a84124456a3a83a830cc891ae6f90508d8ccaa159d886242abc181eef7d160,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724670224829122561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-055395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b978ad17f61285a0ca9fb6b555e7f874,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f572c060-65a4-40e6-97d4-c9dd2733c9f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e247f0cb6ee2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   5481856a84f01       storage-provisioner
	4735d890e73b4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   3                   7db9b4dfb41b5       kube-controller-manager-ha-055395
	f8b101352f735       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   a5997cca5dc2f       kube-apiserver-ha-055395
	9b429292e658d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       4                   5481856a84f01       storage-provisioner
	928140b20a95f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   2b5049e4e5b5f       busybox-7dff88458-xh6vw
	8e71c83fab111       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   2                   7db9b4dfb41b5       kube-controller-manager-ha-055395
	f2d7c2151209b       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   a7119d5357186       kube-vip-ha-055395
	07dedbd1eb60f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   575df53facd27       coredns-6f6b679f8f-l9bd4
	1e9ddffb81c9f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   e70212335fe57       kube-proxy-g45pb
	79c290adde24b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   f6eebe19a373f       kindnet-z2rh2
	9e2b5b7689208       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   926669bae3cfe       coredns-6f6b679f8f-nxb7s
	938e88cf27c38       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   a5997cca5dc2f       kube-apiserver-ha-055395
	49b2d6a852b11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   39d4eeb3baee1       kube-scheduler-ha-055395
	113af412b49ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   d2b709b0b3cf0       etcd-ha-055395
	d2f106e1bd830       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a356e619a2186       busybox-7dff88458-xh6vw
	588201165ca01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   73e7528d83ce5       coredns-6f6b679f8f-nxb7s
	9fdad1c79bb41       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   3593c2f74b608       coredns-6f6b679f8f-l9bd4
	d5ffe25b55c8a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   3f092331272f7       kindnet-z2rh2
	4518376ec7b4a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   dd6c20478efce       kube-proxy-g45pb
	9f71e1964ec11       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   d03f237462672       kube-scheduler-ha-055395
	9500eb08ad452       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   40a84124456a3       etcd-ha-055395
	
	
	==> coredns [07dedbd1eb60f8e759143b687d4f1af13b6e2541e608ce0ba78eef6a963789fe] <==
	Trace[290535648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (11:15:04.633)
	Trace[290535648]: [10.002169458s] [10.002169458s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
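
	Note: the reflector errors above all reduce to one condition: this CoreDNS pod could not open a TCP connection to the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting, so listing Services/Namespaces/EndpointSlices failed with "no route to host" or "connection refused". A minimal Go sketch of the same reachability probe (hypothetical helper, not part of the captured test output; the address is taken from the log above):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Attempt the same TCP connection CoreDNS makes to the kubernetes
	    	// Service VIP; "no route to host" / "connection refused" here mirrors
	    	// the reflector errors in the log above.
	    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver VIP unreachable:", err)
	    		return
	    	}
	    	defer conn.Close()
	    	fmt.Println("apiserver VIP reachable")
	    }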
	
	
	==> coredns [588201165ca01bd25f8758f6f0c3800b0f29ed0e3be52ea6337a5995fdcc0bd9] <==
	[INFO] 10.244.0.4:49284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199959s
	[INFO] 10.244.3.2:38694 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.3.2:55559 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116423s
	[INFO] 10.244.1.2:38712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274813s
	[INFO] 10.244.1.2:38536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091302s
	[INFO] 10.244.0.4:35805 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089054s
	[INFO] 10.244.0.4:53560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109072s
	[INFO] 10.244.0.4:50886 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061358s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1853&timeout=7m19s&timeoutSeconds=439&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1852&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9e2b5b7689208f59873acaf365bbf5820f5bb96fb16c0f5c84e0c0db1c638a40] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1913791900]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (26-Aug-2024 11:14:58.020) (total time: 10000ms):
	Trace[1913791900]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:15:08.021)
	Trace[1913791900]: [10.000976505s] [10.000976505s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41626->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41626->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9fdad1c79bb41605a5e68e7df50167f6e2713b75f6db57bde9e909e250e7287e] <==
	[INFO] 10.244.0.4:57644 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001630694s
	[INFO] 10.244.3.2:35262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118796s
	[INFO] 10.244.3.2:56831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004452512s
	[INFO] 10.244.3.2:50141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195651s
	[INFO] 10.244.3.2:52724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157926s
	[INFO] 10.244.3.2:48168 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135307s
	[INFO] 10.244.1.2:49021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099106s
	[INFO] 10.244.0.4:33653 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173931s
	[INFO] 10.244.0.4:49095 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089973s
	[INFO] 10.244.1.2:60072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132366s
	[INFO] 10.244.1.2:45712 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081817s
	[INFO] 10.244.1.2:47110 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082159s
	[INFO] 10.244.0.4:48619 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100933s
	[INFO] 10.244.0.4:37358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069397s
	[INFO] 10.244.0.4:46981 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092796s
	[INFO] 10.244.3.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240921s
	[INFO] 10.244.3.2:44319 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002198s
	[INFO] 10.244.1.2:48438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216864s
	[INFO] 10.244.1.2:45176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133331s
	[INFO] 10.244.0.4:41108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112163s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1853": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-055395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_03_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:20:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:15:30 +0000   Mon, 26 Aug 2024 11:04:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-055395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 68841a7ef08f47a386553bd433710191
	  System UUID:                68841a7e-f08f-47a3-8655-3bd433710191
	  Boot ID:                    be93c222-ff08-41d5-baae-cb87ba3b44cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xh6vw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-l9bd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-nxb7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-055395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-z2rh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-055395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-055395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-g45pb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-055395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-055395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m42s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-055395 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-055395 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-055395 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-055395 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Warning  ContainerGCFailed        6m25s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m48s (x3 over 6m38s)  kubelet          Node ha-055395 status is now: NodeNotReady
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-055395 event: Registered Node ha-055395 in Controller
	
	
	Name:               ha-055395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_04_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:04:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:20:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:16:14 +0000   Mon, 26 Aug 2024 11:15:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-055395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9151de0e0e3545e983307f4ed75379a4
	  System UUID:                9151de0e-0e35-45e9-8330-7f4ed75379a4
	  Boot ID:                    50ba8f5e-7d65-475d-ad47-4d0ae2236d0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbwm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-055395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-js2cb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-055395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-055395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zl5bm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-055395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-055395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-055395-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-055395-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-055395-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-055395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-055395-m02 event: Registered Node ha-055395-m02 in Controller
	
	
	Name:               ha-055395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-055395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=ha-055395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:07:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-055395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:17:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:18:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:18:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:18:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 26 Aug 2024 11:17:29 +0000   Mon, 26 Aug 2024 11:18:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-055395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fad8927c4194cf6a2bc5a5e286dfbd0
	  System UUID:                0fad8927-c419-4cf6-a2bc-5a5e286dfbd0
	  Boot ID:                    42cb7836-fe18-42d4-950b-8712451bd9c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7fhl4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-n4gpg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-758wf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-055395-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                    node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-055395-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   NodeNotReady             4m7s                   node-controller  Node ha-055395-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-055395-m04 event: Registered Node ha-055395-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-055395-m04 has been rebooted, boot id: 42cb7836-fe18-42d4-950b-8712451bd9c6
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-055395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-055395-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-055395-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s                   node-controller  Node ha-055395-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063641] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061452] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165458] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.147926] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.278562] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.051395] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.884243] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.058746] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.395019] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.102683] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.458120] kauditd_printk_skb: 21 callbacks suppressed
	[Aug26 11:04] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.777933] kauditd_printk_skb: 24 callbacks suppressed
	[Aug26 11:11] kauditd_printk_skb: 1 callbacks suppressed
	[Aug26 11:14] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
	[  +0.146773] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[  +0.200928] systemd-fstab-generator[3619]: Ignoring "noauto" option for root device
	[  +0.172960] systemd-fstab-generator[3631]: Ignoring "noauto" option for root device
	[  +0.287553] systemd-fstab-generator[3659]: Ignoring "noauto" option for root device
	[  +3.147898] systemd-fstab-generator[3763]: Ignoring "noauto" option for root device
	[  +6.512708] kauditd_printk_skb: 122 callbacks suppressed
	[Aug26 11:15] kauditd_printk_skb: 87 callbacks suppressed
	[ +27.136017] kauditd_printk_skb: 5 callbacks suppressed
	[ +20.002082] kauditd_printk_skb: 8 callbacks suppressed
	[Aug26 11:16] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [113af412b49ca5785d1e1a1b69e58d9417bd323cd53e7a4df54ce4f15bcbde0b] <==
	{"level":"info","ts":"2024-08-26T11:16:52.088347Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"ee6a5229deeda489","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-26T11:16:52.088460Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:16:52.098866Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"ee6a5229deeda489","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-26T11:16:52.098990Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:16:52.122539Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.209:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-26T11:16:53.847064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.645908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:439"}
	{"level":"info","ts":"2024-08-26T11:16:53.847290Z","caller":"traceutil/trace.go:171","msg":"trace[162693067] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2418; }","duration":"106.896263ms","start":"2024-08-26T11:16:53.740366Z","end":"2024-08-26T11:16:53.847262Z","steps":["trace[162693067] 'range keys from in-memory index tree'  (duration: 105.406517ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:16:56.131628Z","caller":"traceutil/trace.go:171","msg":"trace[1764644837] transaction","detail":"{read_only:false; response_revision:2427; number_of_response:1; }","duration":"120.017838ms","start":"2024-08-26T11:16:56.011586Z","end":"2024-08-26T11:16:56.131604Z","steps":["trace[1764644837] 'process raft request'  (duration: 119.86995ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:17:43.177289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283 13739999924982367462)"}
	{"level":"info","ts":"2024-08-26T11:17:43.181020Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","removed-remote-peer-id":"ee6a5229deeda489","removed-remote-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-08-26T11:17:43.181147Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.181846Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:17:43.181946Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.182662Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:17:43.183173Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:17:43.185879Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.186247Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489","error":"context canceled"}
	{"level":"warn","ts":"2024-08-26T11:17:43.186352Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ee6a5229deeda489","error":"failed to read ee6a5229deeda489 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-26T11:17:43.186456Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.186820Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489","error":"context canceled"}
	{"level":"info","ts":"2024-08-26T11:17:43.186855Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:17:43.186932Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:17:43.186948Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"2236e2deb63504cb","removed-remote-peer-id":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.191823Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2236e2deb63504cb","remote-peer-id-stream-handler":"2236e2deb63504cb","remote-peer-id-from":"ee6a5229deeda489"}
	{"level":"warn","ts":"2024-08-26T11:17:43.196655Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.209:37098","server-name":"","error":"EOF"}
	
	
	==> etcd [9500eb08ad45265078c4f7b763e262cd7cbe6362393237c991963f7e378558b5] <==
	2024/08/26 11:13:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/26 11:13:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-26T11:13:06.898911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:13:06.899101Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-26T11:13:06.899231Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2236e2deb63504cb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-26T11:13:06.899442Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899542Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899648Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899920Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.899988Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.900002Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"beae49225677c4e6"}
	{"level":"info","ts":"2024-08-26T11:13:06.900008Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900019Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900042Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900161Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900211Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900257Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.900279Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee6a5229deeda489"}
	{"level":"info","ts":"2024-08-26T11:13:06.902599Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"warn","ts":"2024-08-26T11:13:06.902616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.98200491s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-26T11:13:06.902739Z","caller":"traceutil/trace.go:171","msg":"trace[855635748] range","detail":"{range_begin:; range_end:; }","duration":"8.982145437s","start":"2024-08-26T11:12:57.920585Z","end":"2024-08-26T11:13:06.902730Z","steps":["trace[855635748] 'agreement among raft nodes before linearized reading'  (duration: 8.982003038s)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:13:06.902859Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-08-26T11:13:06.902947Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-055395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	{"level":"error","ts":"2024-08-26T11:13:06.902848Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 11:20:16 up 17 min,  0 users,  load average: 0.24, 0.38, 0.28
	Linux ha-055395 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [79c290adde24b34640f13c58347d75c72600720b2294120e14b05f13a83bd031] <==
	I0826 11:19:30.637562       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:19:40.644059       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:19:40.644129       1 main.go:299] handling current node
	I0826 11:19:40.644224       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:19:40.644239       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:19:40.644473       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:19:40.644511       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:19:50.634556       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:19:50.634623       1 main.go:299] handling current node
	I0826 11:19:50.634649       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:19:50.634655       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:19:50.634863       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:19:50.634886       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:20:00.635084       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:20:00.635324       1 main.go:299] handling current node
	I0826 11:20:00.635421       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:20:00.635449       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:20:00.635713       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:20:00.635745       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:20:10.643189       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:20:10.643272       1 main.go:299] handling current node
	I0826 11:20:10.643305       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:20:10.643311       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:20:10.643462       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:20:10.643483       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d5ffe25b55c8a970cb91605bce3a70b4a43c8bb37881610dabce9530f9f93ca8] <==
	I0826 11:12:31.525192       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:12:41.524352       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:12:41.524497       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:12:41.524736       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:12:41.525338       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:12:41.525488       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:12:41.525535       1 main.go:299] handling current node
	I0826 11:12:41.525562       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:12:41.525580       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:12:51.529853       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:12:51.529913       1 main.go:299] handling current node
	I0826 11:12:51.529937       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:12:51.529944       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:12:51.530251       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:12:51.530292       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:12:51.530420       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:12:51.530442       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	I0826 11:13:01.524395       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0826 11:13:01.524517       1 main.go:299] handling current node
	I0826 11:13:01.524551       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0826 11:13:01.524613       1 main.go:322] Node ha-055395-m02 has CIDR [10.244.1.0/24] 
	I0826 11:13:01.524876       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0826 11:13:01.525001       1 main.go:322] Node ha-055395-m03 has CIDR [10.244.3.0/24] 
	I0826 11:13:01.525218       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0826 11:13:01.525269       1 main.go:322] Node ha-055395-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [938e88cf27c384000e20d12c741a467da77beac00460392bd5df149b375c820d] <==
	I0826 11:14:49.999068       1 options.go:228] external host was not specified, using 192.168.39.150
	I0826 11:14:50.001120       1 server.go:142] Version: v1.31.0
	I0826 11:14:50.001169       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:14:50.471885       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0826 11:14:50.487844       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:14:50.493690       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0826 11:14:50.493814       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0826 11:14:50.494136       1 instance.go:232] Using reconciler: lease
	W0826 11:15:10.471352       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0826 11:15:10.471353       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0826 11:15:10.495265       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0826 11:15:10.495407       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f8b101352f735a7f1636cb0c8678e099da5a2a9c9a2e1367728c617d582ff66b] <==
	I0826 11:15:34.598342       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0826 11:15:34.599418       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0826 11:15:34.684318       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 11:15:34.684360       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 11:15:34.684941       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 11:15:34.685375       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0826 11:15:34.688465       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 11:15:34.688587       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:15:34.689120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 11:15:34.689202       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:15:34.689229       1 policy_source.go:224] refreshing policies
	I0826 11:15:34.712645       1 shared_informer.go:320] Caches are synced for configmaps
	I0826 11:15:34.723478       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0826 11:15:34.731022       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 11:15:34.731619       1 aggregator.go:171] initial CRD sync complete...
	I0826 11:15:34.731713       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 11:15:34.731778       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 11:15:34.731811       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:15:34.741937       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0826 11:15:34.770656       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.55]
	I0826 11:15:34.772664       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 11:15:34.785266       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0826 11:15:34.789149       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0826 11:15:35.605372       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0826 11:15:36.109940       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.55]
	
	
	==> kube-controller-manager [4735d890e73b4e07df52152143c949c0989aed4a278911f702640dcf9f15069b] <==
	I0826 11:18:33.087593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:18:33.110866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:18:33.173256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.24375ms"
	I0826 11:18:33.173870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="167.743µs"
	I0826 11:18:34.754521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	E0826 11:18:38.039444       1 gc_controller.go:151] "Failed to get node" err="node \"ha-055395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-055395-m03"
	E0826 11:18:38.039499       1 gc_controller.go:151] "Failed to get node" err="node \"ha-055395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-055395-m03"
	E0826 11:18:38.039508       1 gc_controller.go:151] "Failed to get node" err="node \"ha-055395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-055395-m03"
	E0826 11:18:38.039515       1 gc_controller.go:151] "Failed to get node" err="node \"ha-055395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-055395-m03"
	E0826 11:18:38.039520       1 gc_controller.go:151] "Failed to get node" err="node \"ha-055395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-055395-m03"
	I0826 11:18:38.052826       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-055395-m03"
	I0826 11:18:38.092238       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-055395-m03"
	I0826 11:18:38.092296       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-52vmd"
	I0826 11:18:38.124573       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-52vmd"
	I0826 11:18:38.124694       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-055395-m03"
	I0826 11:18:38.160076       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-055395-m03"
	I0826 11:18:38.160160       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wnz4m"
	I0826 11:18:38.188971       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wnz4m"
	I0826 11:18:38.189074       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-055395-m03"
	I0826 11:18:38.214280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-055395-m04"
	I0826 11:18:38.227711       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-055395-m03"
	I0826 11:18:38.227858       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-055395-m03"
	I0826 11:18:38.273512       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-055395-m03"
	I0826 11:18:38.273545       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-055395-m03"
	I0826 11:18:38.303801       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-055395-m03"
	
	
	==> kube-controller-manager [8e71c83fab111d0f5891f43233f3726bd85df954e8ea20ff3dcadb8b18d2cd00] <==
	I0826 11:15:22.404365       1 serving.go:386] Generated self-signed cert in-memory
	I0826 11:15:22.698175       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0826 11:15:22.698258       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:15:22.699693       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:15:22.699853       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0826 11:15:22.699857       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0826 11:15:22.699989       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0826 11:15:34.736402       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [1e9ddffb81c9fbcee8fab265f708c04f35c0212645996cfde91891541a2fbc87] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:14:51.837225       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:14:54.911378       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:14:57.982633       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:15:04.127506       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0826 11:15:16.414091       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-055395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0826 11:15:33.785700       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0826 11:15:33.786643       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:15:33.860650       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:15:33.860712       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:15:33.860880       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:15:33.875393       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:15:33.876061       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:15:33.876446       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:15:33.883254       1 config.go:197] "Starting service config controller"
	I0826 11:15:33.883482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:15:33.883642       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:15:33.883841       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:15:33.888649       1 config.go:326] "Starting node config controller"
	I0826 11:15:33.888818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:15:33.984595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:15:33.984692       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:15:33.990830       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4518376ec7b4a70c98fd37cb3ab1f57d8cc04a87125a276e5aac9ab69e60a235] <==
	E0826 11:11:59.808594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:02.879562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:02.879597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:09.022212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:09.022340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:09.022454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:09.022484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:12.094560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:12.094739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:18.240312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:18.240364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:21.309241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:21.309300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:27.453984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:27.454059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:33.598109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:33.598249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-055395&resourceVersion=1828\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:36.669512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:36.669591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0826 11:12:42.814640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0826 11:12:42.815109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [49b2d6a852b117e8cab79935d5019ab83901813b629fd3604b1f4f4c84ca70d2] <==
	W0826 11:15:28.849520       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:28.849636       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:28.909835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:28.910452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.489160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.489280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.785021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.150:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.785086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.150:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:29.816022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:29.816116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:30.959850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.150:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:30.959904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.150:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.075516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.150:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.075641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.150:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.122250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.150:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.122336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.150:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.158329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.158455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:31.463032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.150:8443: connect: connection refused
	E0826 11:15:31.463076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.150:8443: connect: connection refused" logger="UnhandledError"
	W0826 11:15:34.607595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:15:34.607718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:15:34.607813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 11:15:34.607846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 11:15:52.607734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9f71e1964ec116c4bda1b5ed4148a0b7e5bf23abfc4762665c1a95cc04ba88a3] <==
	E0826 11:07:03.708838       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.708887       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2ef2b044-3278-43d7-8164-a8b51d7f9424(kube-system/kube-proxy-kkwxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kkwxm"
	E0826 11:07:03.708901       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kkwxm\": pod kube-proxy-kkwxm is already assigned to node \"ha-055395-m04\"" pod="kube-system/kube-proxy-kkwxm"
	I0826 11:07:03.708919       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kkwxm" node="ha-055395-m04"
	E0826 11:07:03.709603       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:07:03.711019       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45edff34-de36-493a-9dba-b74e8a326787(kube-system/kindnet-ww4xl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ww4xl"
	E0826 11:07:03.711136       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ww4xl\": pod kindnet-ww4xl is already assigned to node \"ha-055395-m04\"" pod="kube-system/kindnet-ww4xl"
	I0826 11:07:03.711360       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ww4xl" node="ha-055395-m04"
	E0826 11:12:57.561998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0826 11:12:57.601152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0826 11:12:57.615901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0826 11:12:59.013332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0826 11:12:59.065274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0826 11:13:00.486614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0826 11:13:00.895670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0826 11:13:01.774544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0826 11:13:02.289505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0826 11:13:03.749963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0826 11:13:04.910651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0826 11:13:05.980267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0826 11:13:06.778269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	I0826 11:13:06.818673       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0826 11:13:06.819011       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0826 11:13:06.819229       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0826 11:13:06.819577       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 26 11:18:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:18:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:18:51 ha-055395 kubelet[1329]: E0826 11:18:51.521421    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671131520803236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:18:51 ha-055395 kubelet[1329]: E0826 11:18:51.521491    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671131520803236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:01 ha-055395 kubelet[1329]: E0826 11:19:01.523328    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671141522900764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:01 ha-055395 kubelet[1329]: E0826 11:19:01.523793    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671141522900764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:11 ha-055395 kubelet[1329]: E0826 11:19:11.525640    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671151525296877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:11 ha-055395 kubelet[1329]: E0826 11:19:11.525714    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671151525296877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:21 ha-055395 kubelet[1329]: E0826 11:19:21.527784    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671161527254463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:21 ha-055395 kubelet[1329]: E0826 11:19:21.528107    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671161527254463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:31 ha-055395 kubelet[1329]: E0826 11:19:31.530307    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671171529970118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:31 ha-055395 kubelet[1329]: E0826 11:19:31.530353    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671171529970118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:41 ha-055395 kubelet[1329]: E0826 11:19:41.532892    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671181532422510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:41 ha-055395 kubelet[1329]: E0826 11:19:41.532928    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671181532422510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:51 ha-055395 kubelet[1329]: E0826 11:19:51.275097    1329 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:19:51 ha-055395 kubelet[1329]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:19:51 ha-055395 kubelet[1329]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:19:51 ha-055395 kubelet[1329]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:19:51 ha-055395 kubelet[1329]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:19:51 ha-055395 kubelet[1329]: E0826 11:19:51.534617    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671191534061668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:19:51 ha-055395 kubelet[1329]: E0826 11:19:51.534712    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671191534061668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:20:01 ha-055395 kubelet[1329]: E0826 11:20:01.538057    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671201537362469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:20:01 ha-055395 kubelet[1329]: E0826 11:20:01.538395    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671201537362469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:20:11 ha-055395 kubelet[1329]: E0826 11:20:11.540061    1329 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671211539665051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:20:11 ha-055395 kubelet[1329]: E0826 11:20:11.540529    1329 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724671211539665051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 11:20:15.873343  126041 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
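
The stderr above ends with "bufio.Scanner: token too long" while reading lastStart.txt. That message comes from Go's bufio.Scanner, whose default token limit is bufio.MaxScanTokenSize (64 KiB); a single log line longer than that aborts the scan. The sketch below is illustrative only (it is not minikube's logs.go), showing how the error arises and how a larger buffer avoids it:

	// Illustrative sketch, not minikube's logs.go: reproduce and then avoid
	// the "bufio.Scanner: token too long" error seen in the stderr above.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One line longer than the default 64 KiB token limit.
		longLine := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Println(s.Err()) // bufio.Scanner: token too long

		// Raising the cap lets the same line scan cleanly.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow tokens up to 1 MiB
		for s.Scan() {
			// each complete line would be handled here
		}
		fmt.Println(s.Err()) // <nil>
	}
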
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-055395 -n ha-055395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-055395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (328.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-523807
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-523807
E0826 11:37:20.477814  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-523807: exit status 82 (2m1.820424344s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-523807-m03"  ...
	* Stopping node "multinode-523807-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-523807" : exit status 82
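
Exit status 82 here is the exit code of the stop command that gave up with GUEST_STOP_TIMEOUT in the stderr box above, with the VM still reported as "Running". A hypothetical repro sketch (not code from this suite) that runs the same command outside the test harness and surfaces that exit code:

	// Hypothetical repro sketch (not part of this test suite): run the same stop
	// command and surface its exit status, the way the failure above reports 82.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-523807")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit such as the 82 (GUEST_STOP_TIMEOUT) above lands here.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}
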
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-523807 --wait=true -v=8 --alsologtostderr
E0826 11:39:34.329862  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:40:23.547943  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-523807 --wait=true -v=8 --alsologtostderr: (3m24.464729843s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-523807
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-523807 -n multinode-523807
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-523807 logs -n 25: (1.499872844s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807:/home/docker/cp-test_multinode-523807-m02_multinode-523807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807 sudo cat                                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m02_multinode-523807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03:/home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807-m03 sudo cat                                   | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp testdata/cp-test.txt                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807:/home/docker/cp-test_multinode-523807-m03_multinode-523807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807 sudo cat                                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02:/home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807-m02 sudo cat                                   | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-523807 node stop m03                                                          | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	| node    | multinode-523807 node start                                                             | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| stop    | -p multinode-523807                                                                     | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| start   | -p multinode-523807                                                                     | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:38 UTC | 26 Aug 24 11:42 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:38:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:38:53.790302  135795 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:38:53.790428  135795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:38:53.790438  135795 out.go:358] Setting ErrFile to fd 2...
	I0826 11:38:53.790442  135795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:38:53.790637  135795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:38:53.791220  135795 out.go:352] Setting JSON to false
	I0826 11:38:53.792234  135795 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4875,"bootTime":1724667459,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:38:53.792299  135795 start.go:139] virtualization: kvm guest
	I0826 11:38:53.794654  135795 out.go:177] * [multinode-523807] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:38:53.797106  135795 notify.go:220] Checking for updates...
	I0826 11:38:53.797127  135795 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:38:53.798967  135795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:38:53.800673  135795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:38:53.802392  135795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:38:53.804081  135795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:38:53.805565  135795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:38:53.807438  135795 config.go:182] Loaded profile config "multinode-523807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:38:53.807564  135795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:38:53.807996  135795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:38:53.808069  135795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:38:53.823890  135795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0826 11:38:53.824378  135795 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:38:53.825044  135795 main.go:141] libmachine: Using API Version  1
	I0826 11:38:53.825068  135795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:38:53.825488  135795 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:38:53.825724  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.864584  135795 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:38:53.866187  135795 start.go:297] selected driver: kvm2
	I0826 11:38:53.866238  135795 start.go:901] validating driver "kvm2" against &{Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:38:53.866399  135795 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:38:53.866729  135795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:38:53.866803  135795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:38:53.884013  135795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:38:53.884997  135795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:38:53.885070  135795 cni.go:84] Creating CNI manager for ""
	I0826 11:38:53.885082  135795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0826 11:38:53.885158  135795 start.go:340] cluster config:
	{Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:38:53.885366  135795 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:38:53.887691  135795 out.go:177] * Starting "multinode-523807" primary control-plane node in "multinode-523807" cluster
	I0826 11:38:53.889057  135795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:38:53.889106  135795 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:38:53.889121  135795 cache.go:56] Caching tarball of preloaded images
	I0826 11:38:53.889215  135795 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:38:53.889228  135795 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:38:53.889444  135795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/config.json ...
	I0826 11:38:53.889719  135795 start.go:360] acquireMachinesLock for multinode-523807: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:38:53.889783  135795 start.go:364] duration metric: took 36.408µs to acquireMachinesLock for "multinode-523807"
	I0826 11:38:53.889801  135795 start.go:96] Skipping create...Using existing machine configuration
	I0826 11:38:53.889813  135795 fix.go:54] fixHost starting: 
	I0826 11:38:53.890210  135795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:38:53.890244  135795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:38:53.905908  135795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0826 11:38:53.906454  135795 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:38:53.906997  135795 main.go:141] libmachine: Using API Version  1
	I0826 11:38:53.907028  135795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:38:53.907527  135795 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:38:53.907780  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.908006  135795 main.go:141] libmachine: (multinode-523807) Calling .GetState
	I0826 11:38:53.909639  135795 fix.go:112] recreateIfNeeded on multinode-523807: state=Running err=<nil>
	W0826 11:38:53.909696  135795 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 11:38:53.912498  135795 out.go:177] * Updating the running kvm2 "multinode-523807" VM ...
	I0826 11:38:53.914000  135795 machine.go:93] provisionDockerMachine start ...
	I0826 11:38:53.914036  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.914397  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:53.917506  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:53.917923  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:53.917951  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:53.918171  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:53.918379  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:53.918559  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:53.918767  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:53.918990  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:53.919233  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:53.919246  135795 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 11:38:54.042247  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-523807
	
	I0826 11:38:54.042292  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.042593  135795 buildroot.go:166] provisioning hostname "multinode-523807"
	I0826 11:38:54.042625  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.042828  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.046084  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.046517  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.046554  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.046729  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.046953  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.047128  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.047266  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.047477  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.047654  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.047667  135795 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-523807 && echo "multinode-523807" | sudo tee /etc/hostname
	I0826 11:38:54.180282  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-523807
	
	I0826 11:38:54.180322  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.183283  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.183720  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.183758  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.183992  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.184186  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.184374  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.184487  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.184695  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.184861  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.184877  135795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-523807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-523807/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-523807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:38:54.296012  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:38:54.296050  135795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:38:54.296071  135795 buildroot.go:174] setting up certificates
	I0826 11:38:54.296080  135795 provision.go:84] configureAuth start
	I0826 11:38:54.296089  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.296412  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:38:54.299250  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.299725  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.299761  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.299918  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.302349  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.302716  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.302759  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.302910  135795 provision.go:143] copyHostCerts
	I0826 11:38:54.302951  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:38:54.302983  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:38:54.303000  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:38:54.303068  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:38:54.303149  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:38:54.303171  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:38:54.303180  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:38:54.303215  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:38:54.303292  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:38:54.303314  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:38:54.303321  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:38:54.303348  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:38:54.303401  135795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.multinode-523807 san=[127.0.0.1 192.168.39.26 localhost minikube multinode-523807]
	I0826 11:38:54.439768  135795 provision.go:177] copyRemoteCerts
	I0826 11:38:54.439833  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:38:54.439874  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.443122  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.443523  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.443552  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.443803  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.444010  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.444119  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.444297  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:38:54.528856  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:38:54.528938  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:38:54.555598  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:38:54.555697  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0826 11:38:54.580215  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:38:54.580289  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:38:54.607602  135795 provision.go:87] duration metric: took 311.509229ms to configureAuth
	I0826 11:38:54.607627  135795 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:38:54.607861  135795 config.go:182] Loaded profile config "multinode-523807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:38:54.607945  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.610701  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.611205  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.611235  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.611473  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.611695  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.611952  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.612103  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.612287  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.612495  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.612516  135795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:40:25.310791  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:40:25.310846  135795 machine.go:96] duration metric: took 1m31.396808098s to provisionDockerMachine
	I0826 11:40:25.310863  135795 start.go:293] postStartSetup for "multinode-523807" (driver="kvm2")
	I0826 11:40:25.310879  135795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:40:25.310906  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.311280  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:40:25.311317  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.315043  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.315538  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.315553  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.315783  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.316088  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.316268  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.316438  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.404254  135795 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:40:25.408495  135795 command_runner.go:130] > NAME=Buildroot
	I0826 11:40:25.408520  135795 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0826 11:40:25.408525  135795 command_runner.go:130] > ID=buildroot
	I0826 11:40:25.408530  135795 command_runner.go:130] > VERSION_ID=2023.02.9
	I0826 11:40:25.408535  135795 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0826 11:40:25.408575  135795 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:40:25.408590  135795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:40:25.408672  135795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:40:25.408769  135795 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:40:25.408783  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:40:25.408906  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:40:25.421237  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:40:25.447406  135795 start.go:296] duration metric: took 136.52172ms for postStartSetup
	I0826 11:40:25.447463  135795 fix.go:56] duration metric: took 1m31.557649449s for fixHost
	I0826 11:40:25.447511  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.450761  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.451177  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.451209  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.451366  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.451585  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.451758  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.451896  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.452043  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:40:25.452218  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:40:25.452230  135795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:40:25.563522  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724672425.542126107
	
	I0826 11:40:25.563548  135795 fix.go:216] guest clock: 1724672425.542126107
	I0826 11:40:25.563557  135795 fix.go:229] Guest: 2024-08-26 11:40:25.542126107 +0000 UTC Remote: 2024-08-26 11:40:25.447469459 +0000 UTC m=+91.697446017 (delta=94.656648ms)
	I0826 11:40:25.563585  135795 fix.go:200] guest clock delta is within tolerance: 94.656648ms
	I0826 11:40:25.563592  135795 start.go:83] releasing machines lock for "multinode-523807", held for 1m31.673799983s
	I0826 11:40:25.563619  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.563906  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:40:25.566615  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.567034  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.567059  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.567308  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.567910  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.568110  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.568194  135795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:40:25.568243  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.568328  135795 ssh_runner.go:195] Run: cat /version.json
	I0826 11:40:25.568345  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.571226  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571402  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571630  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.571654  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571849  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.571919  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.571946  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.572056  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.572137  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.572241  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.572342  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.572418  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.572483  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.572671  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.687476  135795 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0826 11:40:25.688218  135795 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0826 11:40:25.688391  135795 ssh_runner.go:195] Run: systemctl --version
	I0826 11:40:25.694389  135795 command_runner.go:130] > systemd 252 (252)
	I0826 11:40:25.694450  135795 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0826 11:40:25.694532  135795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:40:25.855079  135795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0826 11:40:25.861577  135795 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0826 11:40:25.861786  135795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:40:25.861849  135795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:40:25.872002  135795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 11:40:25.872025  135795 start.go:495] detecting cgroup driver to use...
	I0826 11:40:25.872086  135795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:40:25.890111  135795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:40:25.904522  135795 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:40:25.904591  135795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:40:25.919165  135795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:40:25.933348  135795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:40:26.082174  135795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:40:26.224763  135795 docker.go:233] disabling docker service ...
	I0826 11:40:26.224855  135795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:40:26.241669  135795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:40:26.255339  135795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:40:26.401061  135795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:40:26.553130  135795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:40:26.568599  135795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:40:26.588673  135795 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0826 11:40:26.588933  135795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:40:26.589009  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.600000  135795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:40:26.600072  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.611025  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.622725  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.633984  135795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:40:26.646199  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.657086  135795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.668921  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.680037  135795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:40:26.689996  135795 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0826 11:40:26.690109  135795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:40:26.700063  135795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:40:26.844801  135795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:40:29.839316  135795 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.994470612s)
	I0826 11:40:29.839359  135795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:40:29.839421  135795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:40:29.844326  135795 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0826 11:40:29.844360  135795 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0826 11:40:29.844371  135795 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0826 11:40:29.844381  135795 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0826 11:40:29.844388  135795 command_runner.go:130] > Access: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844414  135795 command_runner.go:130] > Modify: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844426  135795 command_runner.go:130] > Change: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844435  135795 command_runner.go:130] >  Birth: -
	I0826 11:40:29.844471  135795 start.go:563] Will wait 60s for crictl version
	I0826 11:40:29.844532  135795 ssh_runner.go:195] Run: which crictl
	I0826 11:40:29.848652  135795 command_runner.go:130] > /usr/bin/crictl
	I0826 11:40:29.848738  135795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:40:29.891772  135795 command_runner.go:130] > Version:  0.1.0
	I0826 11:40:29.891796  135795 command_runner.go:130] > RuntimeName:  cri-o
	I0826 11:40:29.891801  135795 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0826 11:40:29.891810  135795 command_runner.go:130] > RuntimeApiVersion:  v1
	I0826 11:40:29.893171  135795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:40:29.893267  135795 ssh_runner.go:195] Run: crio --version
	I0826 11:40:29.922201  135795 command_runner.go:130] > crio version 1.29.1
	I0826 11:40:29.922238  135795 command_runner.go:130] > Version:        1.29.1
	I0826 11:40:29.922244  135795 command_runner.go:130] > GitCommit:      unknown
	I0826 11:40:29.922248  135795 command_runner.go:130] > GitCommitDate:  unknown
	I0826 11:40:29.922252  135795 command_runner.go:130] > GitTreeState:   clean
	I0826 11:40:29.922258  135795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0826 11:40:29.922262  135795 command_runner.go:130] > GoVersion:      go1.21.6
	I0826 11:40:29.922266  135795 command_runner.go:130] > Compiler:       gc
	I0826 11:40:29.922271  135795 command_runner.go:130] > Platform:       linux/amd64
	I0826 11:40:29.922275  135795 command_runner.go:130] > Linkmode:       dynamic
	I0826 11:40:29.922279  135795 command_runner.go:130] > BuildTags:      
	I0826 11:40:29.922284  135795 command_runner.go:130] >   containers_image_ostree_stub
	I0826 11:40:29.922288  135795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0826 11:40:29.922291  135795 command_runner.go:130] >   btrfs_noversion
	I0826 11:40:29.922296  135795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0826 11:40:29.922303  135795 command_runner.go:130] >   libdm_no_deferred_remove
	I0826 11:40:29.922308  135795 command_runner.go:130] >   seccomp
	I0826 11:40:29.922316  135795 command_runner.go:130] > LDFlags:          unknown
	I0826 11:40:29.922321  135795 command_runner.go:130] > SeccompEnabled:   true
	I0826 11:40:29.922329  135795 command_runner.go:130] > AppArmorEnabled:  false
	I0826 11:40:29.923709  135795 ssh_runner.go:195] Run: crio --version
	I0826 11:40:29.952366  135795 command_runner.go:130] > crio version 1.29.1
	I0826 11:40:29.952397  135795 command_runner.go:130] > Version:        1.29.1
	I0826 11:40:29.952403  135795 command_runner.go:130] > GitCommit:      unknown
	I0826 11:40:29.952408  135795 command_runner.go:130] > GitCommitDate:  unknown
	I0826 11:40:29.952411  135795 command_runner.go:130] > GitTreeState:   clean
	I0826 11:40:29.952417  135795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0826 11:40:29.952421  135795 command_runner.go:130] > GoVersion:      go1.21.6
	I0826 11:40:29.952425  135795 command_runner.go:130] > Compiler:       gc
	I0826 11:40:29.952430  135795 command_runner.go:130] > Platform:       linux/amd64
	I0826 11:40:29.952434  135795 command_runner.go:130] > Linkmode:       dynamic
	I0826 11:40:29.952438  135795 command_runner.go:130] > BuildTags:      
	I0826 11:40:29.952442  135795 command_runner.go:130] >   containers_image_ostree_stub
	I0826 11:40:29.952448  135795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0826 11:40:29.952454  135795 command_runner.go:130] >   btrfs_noversion
	I0826 11:40:29.952468  135795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0826 11:40:29.952476  135795 command_runner.go:130] >   libdm_no_deferred_remove
	I0826 11:40:29.952481  135795 command_runner.go:130] >   seccomp
	I0826 11:40:29.952487  135795 command_runner.go:130] > LDFlags:          unknown
	I0826 11:40:29.952491  135795 command_runner.go:130] > SeccompEnabled:   true
	I0826 11:40:29.952496  135795 command_runner.go:130] > AppArmorEnabled:  false
	I0826 11:40:29.955789  135795 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:40:29.957394  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:40:29.960026  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:29.960321  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:29.960352  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:29.960642  135795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:40:29.965020  135795 command_runner.go:130] > 192.168.39.1	host.minikube.internal
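The grep above confirms that /etc/hosts on the node already maps host.minikube.internal to the host-side gateway IP. A small Go sketch of the same check follows; hasHostEntry is a hypothetical helper, not minikube's code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// hasHostEntry reports whether the hosts file already maps hostname to ip,
// mirroring the `grep 192.168.39.1	host.minikube.internal$ /etc/hosts` step.
func hasHostEntry(path, ip, hostname string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip && fields[len(fields)-1] == hostname {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("host.minikube.internal present:", ok)
}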
	I0826 11:40:29.965145  135795 kubeadm.go:883] updating cluster {Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:40:29.965317  135795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:40:29.965378  135795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:40:30.011835  135795 command_runner.go:130] > {
	I0826 11:40:30.011865  135795 command_runner.go:130] >   "images": [
	I0826 11:40:30.011870  135795 command_runner.go:130] >     {
	I0826 11:40:30.011879  135795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0826 11:40:30.011883  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.011890  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0826 11:40:30.011893  135795 command_runner.go:130] >       ],
	I0826 11:40:30.011897  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.011905  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0826 11:40:30.011912  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0826 11:40:30.011917  135795 command_runner.go:130] >       ],
	I0826 11:40:30.011923  135795 command_runner.go:130] >       "size": "87165492",
	I0826 11:40:30.011930  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.011941  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.011952  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.011961  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.011966  135795 command_runner.go:130] >     },
	I0826 11:40:30.011972  135795 command_runner.go:130] >     {
	I0826 11:40:30.011978  135795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0826 11:40:30.011982  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.011992  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0826 11:40:30.011995  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012001  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012011  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0826 11:40:30.012026  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0826 11:40:30.012035  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012041  135795 command_runner.go:130] >       "size": "87190579",
	I0826 11:40:30.012049  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012068  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012078  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012082  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012085  135795 command_runner.go:130] >     },
	I0826 11:40:30.012088  135795 command_runner.go:130] >     {
	I0826 11:40:30.012096  135795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0826 11:40:30.012102  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012111  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0826 11:40:30.012119  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012127  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012140  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0826 11:40:30.012155  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0826 11:40:30.012163  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012171  135795 command_runner.go:130] >       "size": "1363676",
	I0826 11:40:30.012175  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012184  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012193  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012204  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012209  135795 command_runner.go:130] >     },
	I0826 11:40:30.012218  135795 command_runner.go:130] >     {
	I0826 11:40:30.012230  135795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0826 11:40:30.012240  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012251  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0826 11:40:30.012258  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012262  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012276  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0826 11:40:30.012297  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0826 11:40:30.012307  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012314  135795 command_runner.go:130] >       "size": "31470524",
	I0826 11:40:30.012324  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012330  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012338  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012342  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012350  135795 command_runner.go:130] >     },
	I0826 11:40:30.012354  135795 command_runner.go:130] >     {
	I0826 11:40:30.012368  135795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0826 11:40:30.012377  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012389  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0826 11:40:30.012397  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012407  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012422  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0826 11:40:30.012433  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0826 11:40:30.012441  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012449  135795 command_runner.go:130] >       "size": "61245718",
	I0826 11:40:30.012458  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012468  135795 command_runner.go:130] >       "username": "nonroot",
	I0826 11:40:30.012476  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012486  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012494  135795 command_runner.go:130] >     },
	I0826 11:40:30.012503  135795 command_runner.go:130] >     {
	I0826 11:40:30.012511  135795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0826 11:40:30.012518  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012525  135795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0826 11:40:30.012533  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012540  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012555  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0826 11:40:30.012568  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0826 11:40:30.012577  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012583  135795 command_runner.go:130] >       "size": "149009664",
	I0826 11:40:30.012592  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012596  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012602  135795 command_runner.go:130] >       },
	I0826 11:40:30.012608  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012622  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012633  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012641  135795 command_runner.go:130] >     },
	I0826 11:40:30.012649  135795 command_runner.go:130] >     {
	I0826 11:40:30.012659  135795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0826 11:40:30.012669  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012678  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0826 11:40:30.012685  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012689  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012703  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0826 11:40:30.012719  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0826 11:40:30.012728  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012738  135795 command_runner.go:130] >       "size": "95233506",
	I0826 11:40:30.012746  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012755  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012761  135795 command_runner.go:130] >       },
	I0826 11:40:30.012769  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012775  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012782  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012790  135795 command_runner.go:130] >     },
	I0826 11:40:30.012796  135795 command_runner.go:130] >     {
	I0826 11:40:30.012809  135795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0826 11:40:30.012818  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012830  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0826 11:40:30.012839  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012845  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012862  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0826 11:40:30.012877  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0826 11:40:30.012886  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012895  135795 command_runner.go:130] >       "size": "89437512",
	I0826 11:40:30.012905  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012914  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012922  135795 command_runner.go:130] >       },
	I0826 11:40:30.012928  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012934  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012940  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012943  135795 command_runner.go:130] >     },
	I0826 11:40:30.012946  135795 command_runner.go:130] >     {
	I0826 11:40:30.012955  135795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0826 11:40:30.012961  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012969  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0826 11:40:30.012978  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012989  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013017  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0826 11:40:30.013028  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0826 11:40:30.013036  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013042  135795 command_runner.go:130] >       "size": "92728217",
	I0826 11:40:30.013051  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.013061  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013071  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013080  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.013086  135795 command_runner.go:130] >     },
	I0826 11:40:30.013094  135795 command_runner.go:130] >     {
	I0826 11:40:30.013104  135795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0826 11:40:30.013116  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.013126  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0826 11:40:30.013135  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013145  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013159  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0826 11:40:30.013174  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0826 11:40:30.013183  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013192  135795 command_runner.go:130] >       "size": "68420936",
	I0826 11:40:30.013199  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.013203  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.013210  135795 command_runner.go:130] >       },
	I0826 11:40:30.013220  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013229  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013238  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.013246  135795 command_runner.go:130] >     },
	I0826 11:40:30.013254  135795 command_runner.go:130] >     {
	I0826 11:40:30.013264  135795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0826 11:40:30.013273  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.013280  135795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0826 11:40:30.013286  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013291  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013305  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0826 11:40:30.013319  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0826 11:40:30.013328  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013340  135795 command_runner.go:130] >       "size": "742080",
	I0826 11:40:30.013348  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.013357  135795 command_runner.go:130] >         "value": "65535"
	I0826 11:40:30.013365  135795 command_runner.go:130] >       },
	I0826 11:40:30.013369  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013375  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013382  135795 command_runner.go:130] >       "pinned": true
	I0826 11:40:30.013390  135795 command_runner.go:130] >     }
	I0826 11:40:30.013399  135795 command_runner.go:130] >   ]
	I0826 11:40:30.013407  135795 command_runner.go:130] > }
	I0826 11:40:30.013653  135795 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:40:30.013669  135795 crio.go:433] Images already preloaded, skipping extraction
	I0826 11:40:30.013731  135795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:40:30.049240  135795 command_runner.go:130] > {
	I0826 11:40:30.049271  135795 command_runner.go:130] >   "images": [
	I0826 11:40:30.049275  135795 command_runner.go:130] >     {
	I0826 11:40:30.049283  135795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0826 11:40:30.049287  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049293  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0826 11:40:30.049296  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049300  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049308  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0826 11:40:30.049321  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0826 11:40:30.049326  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049334  135795 command_runner.go:130] >       "size": "87165492",
	I0826 11:40:30.049341  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049348  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049358  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049366  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049370  135795 command_runner.go:130] >     },
	I0826 11:40:30.049375  135795 command_runner.go:130] >     {
	I0826 11:40:30.049381  135795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0826 11:40:30.049388  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049394  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0826 11:40:30.049401  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049413  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049429  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0826 11:40:30.049441  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0826 11:40:30.049450  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049457  135795 command_runner.go:130] >       "size": "87190579",
	I0826 11:40:30.049464  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049470  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049475  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049479  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049487  135795 command_runner.go:130] >     },
	I0826 11:40:30.049493  135795 command_runner.go:130] >     {
	I0826 11:40:30.049507  135795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0826 11:40:30.049517  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049528  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0826 11:40:30.049537  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049544  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049558  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0826 11:40:30.049568  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0826 11:40:30.049575  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049599  135795 command_runner.go:130] >       "size": "1363676",
	I0826 11:40:30.049609  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049616  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049627  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049636  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049641  135795 command_runner.go:130] >     },
	I0826 11:40:30.049648  135795 command_runner.go:130] >     {
	I0826 11:40:30.049656  135795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0826 11:40:30.049663  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049672  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0826 11:40:30.049682  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049689  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049703  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0826 11:40:30.049723  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0826 11:40:30.049733  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049739  135795 command_runner.go:130] >       "size": "31470524",
	I0826 11:40:30.049745  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049756  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049764  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049774  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049780  135795 command_runner.go:130] >     },
	I0826 11:40:30.049788  135795 command_runner.go:130] >     {
	I0826 11:40:30.049798  135795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0826 11:40:30.049808  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049816  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0826 11:40:30.049822  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049828  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049842  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0826 11:40:30.049857  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0826 11:40:30.049865  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049874  135795 command_runner.go:130] >       "size": "61245718",
	I0826 11:40:30.049880  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049889  135795 command_runner.go:130] >       "username": "nonroot",
	I0826 11:40:30.049897  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049903  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049911  135795 command_runner.go:130] >     },
	I0826 11:40:30.049916  135795 command_runner.go:130] >     {
	I0826 11:40:30.049928  135795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0826 11:40:30.049938  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049946  135795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0826 11:40:30.049955  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049961  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049974  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0826 11:40:30.049987  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0826 11:40:30.049994  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049998  135795 command_runner.go:130] >       "size": "149009664",
	I0826 11:40:30.050007  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050014  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050026  135795 command_runner.go:130] >       },
	I0826 11:40:30.050036  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050042  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050050  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050056  135795 command_runner.go:130] >     },
	I0826 11:40:30.050066  135795 command_runner.go:130] >     {
	I0826 11:40:30.050077  135795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0826 11:40:30.050084  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050090  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0826 11:40:30.050098  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050105  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050120  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0826 11:40:30.050134  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0826 11:40:30.050142  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050150  135795 command_runner.go:130] >       "size": "95233506",
	I0826 11:40:30.050163  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050168  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050172  135795 command_runner.go:130] >       },
	I0826 11:40:30.050178  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050185  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050191  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050201  135795 command_runner.go:130] >     },
	I0826 11:40:30.050209  135795 command_runner.go:130] >     {
	I0826 11:40:30.050222  135795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0826 11:40:30.050231  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050239  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0826 11:40:30.050247  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050253  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050273  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0826 11:40:30.050319  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0826 11:40:30.050336  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050342  135795 command_runner.go:130] >       "size": "89437512",
	I0826 11:40:30.050358  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050364  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050373  135795 command_runner.go:130] >       },
	I0826 11:40:30.050380  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050391  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050398  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050407  135795 command_runner.go:130] >     },
	I0826 11:40:30.050412  135795 command_runner.go:130] >     {
	I0826 11:40:30.050424  135795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0826 11:40:30.050435  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050443  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0826 11:40:30.050451  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050458  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050471  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0826 11:40:30.050488  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0826 11:40:30.050497  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050504  135795 command_runner.go:130] >       "size": "92728217",
	I0826 11:40:30.050509  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.050516  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050522  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050531  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050537  135795 command_runner.go:130] >     },
	I0826 11:40:30.050545  135795 command_runner.go:130] >     {
	I0826 11:40:30.050554  135795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0826 11:40:30.050562  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050569  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0826 11:40:30.050577  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050594  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050609  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0826 11:40:30.050621  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0826 11:40:30.050630  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050638  135795 command_runner.go:130] >       "size": "68420936",
	I0826 11:40:30.050647  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050657  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050666  135795 command_runner.go:130] >       },
	I0826 11:40:30.050675  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050684  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050690  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050697  135795 command_runner.go:130] >     },
	I0826 11:40:30.050702  135795 command_runner.go:130] >     {
	I0826 11:40:30.050711  135795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0826 11:40:30.050717  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050722  135795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0826 11:40:30.050728  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050732  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050743  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0826 11:40:30.050752  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0826 11:40:30.050758  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050763  135795 command_runner.go:130] >       "size": "742080",
	I0826 11:40:30.050769  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050774  135795 command_runner.go:130] >         "value": "65535"
	I0826 11:40:30.050779  135795 command_runner.go:130] >       },
	I0826 11:40:30.050784  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050789  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050793  135795 command_runner.go:130] >       "pinned": true
	I0826 11:40:30.050799  135795 command_runner.go:130] >     }
	I0826 11:40:30.050803  135795 command_runner.go:130] >   ]
	I0826 11:40:30.050809  135795 command_runner.go:130] > }
	I0826 11:40:30.050955  135795 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:40:30.050968  135795 cache_images.go:84] Images are preloaded, skipping loading
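crio.go:514 and cache_images.go:84 above conclude that preload extraction can be skipped by inspecting the JSON printed by `sudo crictl images --output json`. The Go sketch below shows one hedged way such output could be decoded and checked for a required tag; the imageList struct only models the fields visible in the log and is not minikube's actual type.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the JSON shape shown in the log above; only the fields
// used here are declared.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images failed: %v", err)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}

	// Check that one preload-critical tag is present, e.g. the apiserver
	// image for the Kubernetes version being started.
	want := "registry.k8s.io/kube-apiserver:v1.31.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Printf("found %s (id %s, %s bytes)\n", tag, img.ID[:12], img.Size)
				return
			}
		}
	}
	fmt.Printf("%s not found; preload extraction would be needed\n", want)
}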
	I0826 11:40:30.050977  135795 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.31.0 crio true true} ...
	I0826 11:40:30.051094  135795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-523807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
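kubeadm.go:946 above logs the kubelet systemd drop-in generated for this node: the empty ExecStart= clears the unit's default command, and the second ExecStart supplies the full kubelet invocation with --hostname-override and --node-ip. Below is a small Go text/template sketch that renders an equivalent drop-in; the template string and field names are assumptions for illustration, not minikube's real template.

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the drop-in shown in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "multinode-523807", "192.168.39.26"}
	// Writing to stdout here; minikube writes the rendered unit onto the node.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}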
	I0826 11:40:30.051169  135795 ssh_runner.go:195] Run: crio config
	I0826 11:40:30.084073  135795 command_runner.go:130] ! time="2024-08-26 11:40:30.062416727Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0826 11:40:30.095225  135795 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0826 11:40:30.100592  135795 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0826 11:40:30.100619  135795 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0826 11:40:30.100626  135795 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0826 11:40:30.100631  135795 command_runner.go:130] > #
	I0826 11:40:30.100639  135795 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0826 11:40:30.100645  135795 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0826 11:40:30.100651  135795 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0826 11:40:30.100660  135795 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0826 11:40:30.100664  135795 command_runner.go:130] > # reload'.
	I0826 11:40:30.100671  135795 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0826 11:40:30.100676  135795 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0826 11:40:30.100682  135795 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0826 11:40:30.100688  135795 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0826 11:40:30.100691  135795 command_runner.go:130] > [crio]
	I0826 11:40:30.100697  135795 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0826 11:40:30.100705  135795 command_runner.go:130] > # containers images, in this directory.
	I0826 11:40:30.100710  135795 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0826 11:40:30.100724  135795 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0826 11:40:30.100737  135795 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0826 11:40:30.100747  135795 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0826 11:40:30.100753  135795 command_runner.go:130] > # imagestore = ""
	I0826 11:40:30.100766  135795 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0826 11:40:30.100779  135795 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0826 11:40:30.100789  135795 command_runner.go:130] > storage_driver = "overlay"
	I0826 11:40:30.100798  135795 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0826 11:40:30.100809  135795 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0826 11:40:30.100831  135795 command_runner.go:130] > storage_option = [
	I0826 11:40:30.100842  135795 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0826 11:40:30.100855  135795 command_runner.go:130] > ]
	I0826 11:40:30.100863  135795 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0826 11:40:30.100871  135795 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0826 11:40:30.100876  135795 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0826 11:40:30.100883  135795 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0826 11:40:30.100891  135795 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0826 11:40:30.100896  135795 command_runner.go:130] > # always happen on a node reboot
	I0826 11:40:30.100901  135795 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0826 11:40:30.100913  135795 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0826 11:40:30.100925  135795 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0826 11:40:30.100935  135795 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0826 11:40:30.100946  135795 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0826 11:40:30.100959  135795 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0826 11:40:30.100969  135795 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0826 11:40:30.100974  135795 command_runner.go:130] > # internal_wipe = true
	I0826 11:40:30.100983  135795 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0826 11:40:30.100991  135795 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0826 11:40:30.100995  135795 command_runner.go:130] > # internal_repair = false
	I0826 11:40:30.101002  135795 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0826 11:40:30.101011  135795 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0826 11:40:30.101020  135795 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0826 11:40:30.101029  135795 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0826 11:40:30.101038  135795 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0826 11:40:30.101044  135795 command_runner.go:130] > [crio.api]
	I0826 11:40:30.101053  135795 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0826 11:40:30.101061  135795 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0826 11:40:30.101068  135795 command_runner.go:130] > # IP address on which the stream server will listen.
	I0826 11:40:30.101072  135795 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0826 11:40:30.101078  135795 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0826 11:40:30.101088  135795 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0826 11:40:30.101096  135795 command_runner.go:130] > # stream_port = "0"
	I0826 11:40:30.101109  135795 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0826 11:40:30.101118  135795 command_runner.go:130] > # stream_enable_tls = false
	I0826 11:40:30.101130  135795 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0826 11:40:30.101138  135795 command_runner.go:130] > # stream_idle_timeout = ""
	I0826 11:40:30.101155  135795 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0826 11:40:30.101164  135795 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0826 11:40:30.101168  135795 command_runner.go:130] > # minutes.
	I0826 11:40:30.101177  135795 command_runner.go:130] > # stream_tls_cert = ""
	I0826 11:40:30.101187  135795 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0826 11:40:30.101200  135795 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0826 11:40:30.101209  135795 command_runner.go:130] > # stream_tls_key = ""
	I0826 11:40:30.101218  135795 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0826 11:40:30.101231  135795 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0826 11:40:30.101261  135795 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0826 11:40:30.101270  135795 command_runner.go:130] > # stream_tls_ca = ""
	I0826 11:40:30.101282  135795 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0826 11:40:30.101293  135795 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0826 11:40:30.101305  135795 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0826 11:40:30.101315  135795 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0826 11:40:30.101328  135795 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0826 11:40:30.101338  135795 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0826 11:40:30.101344  135795 command_runner.go:130] > [crio.runtime]
	I0826 11:40:30.101353  135795 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0826 11:40:30.101366  135795 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0826 11:40:30.101375  135795 command_runner.go:130] > # "nofile=1024:2048"
	I0826 11:40:30.101385  135795 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0826 11:40:30.101394  135795 command_runner.go:130] > # default_ulimits = [
	I0826 11:40:30.101399  135795 command_runner.go:130] > # ]
	I0826 11:40:30.101410  135795 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0826 11:40:30.101418  135795 command_runner.go:130] > # no_pivot = false
	I0826 11:40:30.101425  135795 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0826 11:40:30.101437  135795 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0826 11:40:30.101448  135795 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0826 11:40:30.101460  135795 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0826 11:40:30.101467  135795 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0826 11:40:30.101479  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0826 11:40:30.101489  135795 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0826 11:40:30.101524  135795 command_runner.go:130] > # Cgroup setting for conmon
	I0826 11:40:30.101545  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0826 11:40:30.101552  135795 command_runner.go:130] > conmon_cgroup = "pod"
	I0826 11:40:30.101565  135795 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0826 11:40:30.101576  135795 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0826 11:40:30.101591  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0826 11:40:30.101598  135795 command_runner.go:130] > conmon_env = [
	I0826 11:40:30.101606  135795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0826 11:40:30.101614  135795 command_runner.go:130] > ]
	I0826 11:40:30.101623  135795 command_runner.go:130] > # Additional environment variables to set for all the
	I0826 11:40:30.101634  135795 command_runner.go:130] > # containers. These are overridden if set in the
	I0826 11:40:30.101646  135795 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0826 11:40:30.101655  135795 command_runner.go:130] > # default_env = [
	I0826 11:40:30.101661  135795 command_runner.go:130] > # ]
	I0826 11:40:30.101672  135795 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0826 11:40:30.101686  135795 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0826 11:40:30.101691  135795 command_runner.go:130] > # selinux = false
	I0826 11:40:30.101701  135795 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0826 11:40:30.101713  135795 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0826 11:40:30.101728  135795 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0826 11:40:30.101738  135795 command_runner.go:130] > # seccomp_profile = ""
	I0826 11:40:30.101747  135795 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0826 11:40:30.101758  135795 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0826 11:40:30.101771  135795 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0826 11:40:30.101778  135795 command_runner.go:130] > # which might increase security.
	I0826 11:40:30.101783  135795 command_runner.go:130] > # This option is currently deprecated,
	I0826 11:40:30.101792  135795 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0826 11:40:30.101802  135795 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0826 11:40:30.101812  135795 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0826 11:40:30.101826  135795 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0826 11:40:30.101837  135795 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0826 11:40:30.101859  135795 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0826 11:40:30.101867  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.101872  135795 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0826 11:40:30.101883  135795 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0826 11:40:30.101891  135795 command_runner.go:130] > # the cgroup blockio controller.
	I0826 11:40:30.101901  135795 command_runner.go:130] > # blockio_config_file = ""
	I0826 11:40:30.101911  135795 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0826 11:40:30.101920  135795 command_runner.go:130] > # blockio parameters.
	I0826 11:40:30.101927  135795 command_runner.go:130] > # blockio_reload = false
	I0826 11:40:30.101941  135795 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0826 11:40:30.101949  135795 command_runner.go:130] > # irqbalance daemon.
	I0826 11:40:30.101954  135795 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0826 11:40:30.101969  135795 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0826 11:40:30.101983  135795 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0826 11:40:30.101995  135795 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0826 11:40:30.102007  135795 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0826 11:40:30.102021  135795 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0826 11:40:30.102031  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.102038  135795 command_runner.go:130] > # rdt_config_file = ""
	I0826 11:40:30.102045  135795 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0826 11:40:30.102054  135795 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0826 11:40:30.102079  135795 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0826 11:40:30.102089  135795 command_runner.go:130] > # separate_pull_cgroup = ""
	I0826 11:40:30.102097  135795 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0826 11:40:30.102110  135795 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0826 11:40:30.102118  135795 command_runner.go:130] > # will be added.
	I0826 11:40:30.102123  135795 command_runner.go:130] > # default_capabilities = [
	I0826 11:40:30.102129  135795 command_runner.go:130] > # 	"CHOWN",
	I0826 11:40:30.102136  135795 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0826 11:40:30.102145  135795 command_runner.go:130] > # 	"FSETID",
	I0826 11:40:30.102150  135795 command_runner.go:130] > # 	"FOWNER",
	I0826 11:40:30.102156  135795 command_runner.go:130] > # 	"SETGID",
	I0826 11:40:30.102162  135795 command_runner.go:130] > # 	"SETUID",
	I0826 11:40:30.102168  135795 command_runner.go:130] > # 	"SETPCAP",
	I0826 11:40:30.102174  135795 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0826 11:40:30.102179  135795 command_runner.go:130] > # 	"KILL",
	I0826 11:40:30.102184  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102195  135795 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0826 11:40:30.102207  135795 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0826 11:40:30.102211  135795 command_runner.go:130] > # add_inheritable_capabilities = false
	I0826 11:40:30.102221  135795 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0826 11:40:30.102234  135795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0826 11:40:30.102242  135795 command_runner.go:130] > default_sysctls = [
	I0826 11:40:30.102251  135795 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0826 11:40:30.102256  135795 command_runner.go:130] > ]
	I0826 11:40:30.102264  135795 command_runner.go:130] > # List of devices on the host that a
	I0826 11:40:30.102274  135795 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0826 11:40:30.102283  135795 command_runner.go:130] > # allowed_devices = [
	I0826 11:40:30.102288  135795 command_runner.go:130] > # 	"/dev/fuse",
	I0826 11:40:30.102292  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102296  135795 command_runner.go:130] > # List of additional devices, specified as
	I0826 11:40:30.102310  135795 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0826 11:40:30.102321  135795 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0826 11:40:30.102333  135795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0826 11:40:30.102343  135795 command_runner.go:130] > # additional_devices = [
	I0826 11:40:30.102348  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102357  135795 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0826 11:40:30.102364  135795 command_runner.go:130] > # cdi_spec_dirs = [
	I0826 11:40:30.102370  135795 command_runner.go:130] > # 	"/etc/cdi",
	I0826 11:40:30.102375  135795 command_runner.go:130] > # 	"/var/run/cdi",
	I0826 11:40:30.102382  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102391  135795 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0826 11:40:30.102404  135795 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0826 11:40:30.102411  135795 command_runner.go:130] > # Defaults to false.
	I0826 11:40:30.102419  135795 command_runner.go:130] > # device_ownership_from_security_context = false
	I0826 11:40:30.102433  135795 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0826 11:40:30.102442  135795 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0826 11:40:30.102451  135795 command_runner.go:130] > # hooks_dir = [
	I0826 11:40:30.102458  135795 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0826 11:40:30.102464  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102471  135795 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0826 11:40:30.102483  135795 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0826 11:40:30.102495  135795 command_runner.go:130] > # its default mounts from the following two files:
	I0826 11:40:30.102503  135795 command_runner.go:130] > #
	I0826 11:40:30.102513  135795 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0826 11:40:30.102526  135795 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0826 11:40:30.102538  135795 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0826 11:40:30.102543  135795 command_runner.go:130] > #
	I0826 11:40:30.102549  135795 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0826 11:40:30.102556  135795 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0826 11:40:30.102565  135795 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0826 11:40:30.102573  135795 command_runner.go:130] > #      only add mounts it finds in this file.
	I0826 11:40:30.102581  135795 command_runner.go:130] > #
	I0826 11:40:30.102589  135795 command_runner.go:130] > # default_mounts_file = ""
	I0826 11:40:30.102599  135795 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0826 11:40:30.102610  135795 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0826 11:40:30.102619  135795 command_runner.go:130] > pids_limit = 1024
	I0826 11:40:30.102629  135795 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0826 11:40:30.102642  135795 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0826 11:40:30.102651  135795 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0826 11:40:30.102667  135795 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0826 11:40:30.102677  135795 command_runner.go:130] > # log_size_max = -1
	I0826 11:40:30.102688  135795 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0826 11:40:30.102700  135795 command_runner.go:130] > # log_to_journald = false
	I0826 11:40:30.102710  135795 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0826 11:40:30.102719  135795 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0826 11:40:30.102725  135795 command_runner.go:130] > # Path to directory for container attach sockets.
	I0826 11:40:30.102733  135795 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0826 11:40:30.102743  135795 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0826 11:40:30.102754  135795 command_runner.go:130] > # bind_mount_prefix = ""
	I0826 11:40:30.102763  135795 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0826 11:40:30.102770  135795 command_runner.go:130] > # read_only = false
	I0826 11:40:30.102782  135795 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0826 11:40:30.102791  135795 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0826 11:40:30.102799  135795 command_runner.go:130] > # live configuration reload.
	I0826 11:40:30.102804  135795 command_runner.go:130] > # log_level = "info"
	I0826 11:40:30.102810  135795 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0826 11:40:30.102816  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.102822  135795 command_runner.go:130] > # log_filter = ""
	I0826 11:40:30.102853  135795 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0826 11:40:30.102867  135795 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0826 11:40:30.102877  135795 command_runner.go:130] > # separated by comma.
	I0826 11:40:30.102889  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.102898  135795 command_runner.go:130] > # uid_mappings = ""
	I0826 11:40:30.102907  135795 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0826 11:40:30.102920  135795 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0826 11:40:30.102930  135795 command_runner.go:130] > # separated by comma.
	I0826 11:40:30.102942  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.102951  135795 command_runner.go:130] > # gid_mappings = ""
	I0826 11:40:30.102960  135795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0826 11:40:30.102972  135795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0826 11:40:30.102982  135795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0826 11:40:30.102991  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.103000  135795 command_runner.go:130] > # minimum_mappable_uid = -1
	I0826 11:40:30.103011  135795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0826 11:40:30.103024  135795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0826 11:40:30.103036  135795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0826 11:40:30.103050  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.103063  135795 command_runner.go:130] > # minimum_mappable_gid = -1
	I0826 11:40:30.103071  135795 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0826 11:40:30.103080  135795 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0826 11:40:30.103093  135795 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0826 11:40:30.103102  135795 command_runner.go:130] > # ctr_stop_timeout = 30
	I0826 11:40:30.103114  135795 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0826 11:40:30.103128  135795 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0826 11:40:30.103138  135795 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0826 11:40:30.103148  135795 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0826 11:40:30.103152  135795 command_runner.go:130] > drop_infra_ctr = false
	I0826 11:40:30.103159  135795 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0826 11:40:30.103171  135795 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0826 11:40:30.103186  135795 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0826 11:40:30.103196  135795 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0826 11:40:30.103207  135795 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0826 11:40:30.103219  135795 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0826 11:40:30.103229  135795 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0826 11:40:30.103237  135795 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0826 11:40:30.103241  135795 command_runner.go:130] > # shared_cpuset = ""
	I0826 11:40:30.103250  135795 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0826 11:40:30.103262  135795 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0826 11:40:30.103272  135795 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0826 11:40:30.103286  135795 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0826 11:40:30.103296  135795 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0826 11:40:30.103305  135795 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0826 11:40:30.103317  135795 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0826 11:40:30.103322  135795 command_runner.go:130] > # enable_criu_support = false
	I0826 11:40:30.103327  135795 command_runner.go:130] > # Enable/disable the generation of the container and
	I0826 11:40:30.103339  135795 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0826 11:40:30.103348  135795 command_runner.go:130] > # enable_pod_events = false
	I0826 11:40:30.103358  135795 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0826 11:40:30.103383  135795 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0826 11:40:30.103393  135795 command_runner.go:130] > # default_runtime = "runc"
	I0826 11:40:30.103401  135795 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0826 11:40:30.103411  135795 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0826 11:40:30.103424  135795 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0826 11:40:30.103439  135795 command_runner.go:130] > # creation as a file is not desired either.
	I0826 11:40:30.103454  135795 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0826 11:40:30.103465  135795 command_runner.go:130] > # the hostname is being managed dynamically.
	I0826 11:40:30.103476  135795 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0826 11:40:30.103481  135795 command_runner.go:130] > # ]
	I0826 11:40:30.103491  135795 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0826 11:40:30.103497  135795 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0826 11:40:30.103508  135795 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0826 11:40:30.103520  135795 command_runner.go:130] > # Each entry in the table should follow the format:
	I0826 11:40:30.103528  135795 command_runner.go:130] > #
	I0826 11:40:30.103536  135795 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0826 11:40:30.103547  135795 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0826 11:40:30.103599  135795 command_runner.go:130] > # runtime_type = "oci"
	I0826 11:40:30.103611  135795 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0826 11:40:30.103619  135795 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0826 11:40:30.103626  135795 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0826 11:40:30.103634  135795 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0826 11:40:30.103643  135795 command_runner.go:130] > # monitor_env = []
	I0826 11:40:30.103651  135795 command_runner.go:130] > # privileged_without_host_devices = false
	I0826 11:40:30.103660  135795 command_runner.go:130] > # allowed_annotations = []
	I0826 11:40:30.103669  135795 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0826 11:40:30.103675  135795 command_runner.go:130] > # Where:
	I0826 11:40:30.103681  135795 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0826 11:40:30.103695  135795 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0826 11:40:30.103708  135795 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0826 11:40:30.103721  135795 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0826 11:40:30.103727  135795 command_runner.go:130] > #   in $PATH.
	I0826 11:40:30.103737  135795 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0826 11:40:30.103748  135795 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0826 11:40:30.103758  135795 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0826 11:40:30.103762  135795 command_runner.go:130] > #   state.
	I0826 11:40:30.103771  135795 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0826 11:40:30.103783  135795 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0826 11:40:30.103796  135795 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0826 11:40:30.103808  135795 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0826 11:40:30.103820  135795 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0826 11:40:30.103830  135795 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0826 11:40:30.103842  135795 command_runner.go:130] > #   The currently recognized values are:
	I0826 11:40:30.103856  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0826 11:40:30.103871  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0826 11:40:30.103884  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0826 11:40:30.103896  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0826 11:40:30.103911  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0826 11:40:30.103923  135795 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0826 11:40:30.103931  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0826 11:40:30.103943  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0826 11:40:30.103956  135795 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0826 11:40:30.103969  135795 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0826 11:40:30.103979  135795 command_runner.go:130] > #   deprecated option "conmon".
	I0826 11:40:30.103989  135795 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0826 11:40:30.104000  135795 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0826 11:40:30.104012  135795 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0826 11:40:30.104020  135795 command_runner.go:130] > #   should be moved to the container's cgroup
	I0826 11:40:30.104029  135795 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0826 11:40:30.104040  135795 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0826 11:40:30.104050  135795 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0826 11:40:30.104061  135795 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0826 11:40:30.104068  135795 command_runner.go:130] > #
	I0826 11:40:30.104074  135795 command_runner.go:130] > # Using the seccomp notifier feature:
	I0826 11:40:30.104082  135795 command_runner.go:130] > #
	I0826 11:40:30.104092  135795 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0826 11:40:30.104102  135795 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0826 11:40:30.104106  135795 command_runner.go:130] > #
	I0826 11:40:30.104115  135795 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0826 11:40:30.104127  135795 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0826 11:40:30.104133  135795 command_runner.go:130] > #
	I0826 11:40:30.104143  135795 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0826 11:40:30.104152  135795 command_runner.go:130] > # feature.
	I0826 11:40:30.104158  135795 command_runner.go:130] > #
	I0826 11:40:30.104170  135795 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0826 11:40:30.104181  135795 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0826 11:40:30.104188  135795 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0826 11:40:30.104200  135795 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0826 11:40:30.104212  135795 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0826 11:40:30.104220  135795 command_runner.go:130] > #
	I0826 11:40:30.104232  135795 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0826 11:40:30.104244  135795 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0826 11:40:30.104252  135795 command_runner.go:130] > #
	I0826 11:40:30.104262  135795 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0826 11:40:30.104271  135795 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0826 11:40:30.104274  135795 command_runner.go:130] > #
	I0826 11:40:30.104282  135795 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0826 11:40:30.104294  135795 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0826 11:40:30.104303  135795 command_runner.go:130] > # limitation.
	I0826 11:40:30.104312  135795 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0826 11:40:30.104322  135795 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0826 11:40:30.104329  135795 command_runner.go:130] > runtime_type = "oci"
	I0826 11:40:30.104337  135795 command_runner.go:130] > runtime_root = "/run/runc"
	I0826 11:40:30.104343  135795 command_runner.go:130] > runtime_config_path = ""
	I0826 11:40:30.104353  135795 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0826 11:40:30.104357  135795 command_runner.go:130] > monitor_cgroup = "pod"
	I0826 11:40:30.104365  135795 command_runner.go:130] > monitor_exec_cgroup = ""
	I0826 11:40:30.104371  135795 command_runner.go:130] > monitor_env = [
	I0826 11:40:30.104384  135795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0826 11:40:30.104392  135795 command_runner.go:130] > ]
	I0826 11:40:30.104399  135795 command_runner.go:130] > privileged_without_host_devices = false
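The [crio.runtime.runtimes.runc] block above is the only handler defined in this configuration. As a hedged illustration of the table format described earlier, and of granting a handler the seccomp notifier annotation, a second handler could be declared as follows; the handler name, binary path and annotation list are assumptions for the example, not part of the dumped config:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"                  # assumed binary location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",  # lets pods opt into the notifier feature
		"io.kubernetes.cri-o.Devices",
	]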
	I0826 11:40:30.104412  135795 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0826 11:40:30.104423  135795 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0826 11:40:30.104433  135795 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0826 11:40:30.104444  135795 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0826 11:40:30.104457  135795 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0826 11:40:30.104469  135795 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0826 11:40:30.104486  135795 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0826 11:40:30.104501  135795 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0826 11:40:30.104511  135795 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0826 11:40:30.104522  135795 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0826 11:40:30.104526  135795 command_runner.go:130] > # Example:
	I0826 11:40:30.104530  135795 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0826 11:40:30.104536  135795 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0826 11:40:30.104544  135795 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0826 11:40:30.104556  135795 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0826 11:40:30.104563  135795 command_runner.go:130] > # cpuset = 0
	I0826 11:40:30.104569  135795 command_runner.go:130] > # cpushares = "0-1"
	I0826 11:40:30.104578  135795 command_runner.go:130] > # Where:
	I0826 11:40:30.104585  135795 command_runner.go:130] > # The workload name is workload-type.
	I0826 11:40:30.104600  135795 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0826 11:40:30.104610  135795 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0826 11:40:30.104616  135795 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0826 11:40:30.104629  135795 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0826 11:40:30.104642  135795 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0826 11:40:30.104652  135795 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0826 11:40:30.104666  135795 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0826 11:40:30.104676  135795 command_runner.go:130] > # Default value is set to true
	I0826 11:40:30.104684  135795 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0826 11:40:30.104695  135795 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0826 11:40:30.104702  135795 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0826 11:40:30.104707  135795 command_runner.go:130] > # Default value is set to 'false'
	I0826 11:40:30.104717  135795 command_runner.go:130] > # disable_hostport_mapping = false
	I0826 11:40:30.104735  135795 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0826 11:40:30.104744  135795 command_runner.go:130] > #
	I0826 11:40:30.104753  135795 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0826 11:40:30.104766  135795 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0826 11:40:30.104778  135795 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0826 11:40:30.104788  135795 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0826 11:40:30.104794  135795 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0826 11:40:30.104799  135795 command_runner.go:130] > [crio.image]
	I0826 11:40:30.104811  135795 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0826 11:40:30.104822  135795 command_runner.go:130] > # default_transport = "docker://"
	I0826 11:40:30.104834  135795 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0826 11:40:30.104851  135795 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0826 11:40:30.104860  135795 command_runner.go:130] > # global_auth_file = ""
	I0826 11:40:30.104869  135795 command_runner.go:130] > # The image used to instantiate infra containers.
	I0826 11:40:30.104877  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.104883  135795 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0826 11:40:30.104895  135795 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0826 11:40:30.104905  135795 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0826 11:40:30.104916  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.104927  135795 command_runner.go:130] > # pause_image_auth_file = ""
	I0826 11:40:30.104939  135795 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0826 11:40:30.104951  135795 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0826 11:40:30.104960  135795 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0826 11:40:30.104966  135795 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0826 11:40:30.104972  135795 command_runner.go:130] > # pause_command = "/pause"
	I0826 11:40:30.104978  135795 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0826 11:40:30.104986  135795 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0826 11:40:30.104995  135795 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0826 11:40:30.105011  135795 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0826 11:40:30.105023  135795 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0826 11:40:30.105035  135795 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0826 11:40:30.105045  135795 command_runner.go:130] > # pinned_images = [
	I0826 11:40:30.105051  135795 command_runner.go:130] > # ]
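A hedged sketch of how the matching rules above combine in practice; apart from the pause image already configured here, the image names are placeholders:

	pinned_images = [
		"registry.k8s.io/pause:3.10",       # exact match: must match the whole name
		"registry.k8s.io/kube-apiserver*",  # glob: wildcard only at the end
		"*coredns*",                        # keyword: wildcards on both ends
	]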
	I0826 11:40:30.105063  135795 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0826 11:40:30.105072  135795 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0826 11:40:30.105078  135795 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0826 11:40:30.105086  135795 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0826 11:40:30.105091  135795 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0826 11:40:30.105095  135795 command_runner.go:130] > # signature_policy = ""
	I0826 11:40:30.105101  135795 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0826 11:40:30.105109  135795 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0826 11:40:30.105115  135795 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0826 11:40:30.105124  135795 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0826 11:40:30.105129  135795 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0826 11:40:30.105139  135795 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0826 11:40:30.105153  135795 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0826 11:40:30.105166  135795 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0826 11:40:30.105175  135795 command_runner.go:130] > # changing them here.
	I0826 11:40:30.105182  135795 command_runner.go:130] > # insecure_registries = [
	I0826 11:40:30.105190  135795 command_runner.go:130] > # ]
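If a plain-HTTP registry really had to be trusted only for CRI-O, rather than system-wide via registries.conf as recommended above, the commented list could be uncommented; the address below is a placeholder, not used by this cluster:

	insecure_registries = [
		"registry.local:5000",  # placeholder registry address
	]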
	I0826 11:40:30.105201  135795 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0826 11:40:30.105210  135795 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0826 11:40:30.105214  135795 command_runner.go:130] > # image_volumes = "mkdir"
	I0826 11:40:30.105219  135795 command_runner.go:130] > # Temporary directory to use for storing big files
	I0826 11:40:30.105225  135795 command_runner.go:130] > # big_files_temporary_dir = ""
	I0826 11:40:30.105235  135795 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0826 11:40:30.105241  135795 command_runner.go:130] > # CNI plugins.
	I0826 11:40:30.105244  135795 command_runner.go:130] > [crio.network]
	I0826 11:40:30.105250  135795 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0826 11:40:30.105257  135795 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0826 11:40:30.105262  135795 command_runner.go:130] > # cni_default_network = ""
	I0826 11:40:30.105269  135795 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0826 11:40:30.105274  135795 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0826 11:40:30.105281  135795 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0826 11:40:30.105285  135795 command_runner.go:130] > # plugin_dirs = [
	I0826 11:40:30.105288  135795 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0826 11:40:30.105292  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105298  135795 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0826 11:40:30.105304  135795 command_runner.go:130] > [crio.metrics]
	I0826 11:40:30.105309  135795 command_runner.go:130] > # Globally enable or disable metrics support.
	I0826 11:40:30.105314  135795 command_runner.go:130] > enable_metrics = true
	I0826 11:40:30.105319  135795 command_runner.go:130] > # Specify enabled metrics collectors.
	I0826 11:40:30.105326  135795 command_runner.go:130] > # Per default all metrics are enabled.
	I0826 11:40:30.105332  135795 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0826 11:40:30.105340  135795 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0826 11:40:30.105345  135795 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0826 11:40:30.105351  135795 command_runner.go:130] > # metrics_collectors = [
	I0826 11:40:30.105355  135795 command_runner.go:130] > # 	"operations",
	I0826 11:40:30.105361  135795 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0826 11:40:30.105370  135795 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0826 11:40:30.105377  135795 command_runner.go:130] > # 	"operations_errors",
	I0826 11:40:30.105386  135795 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0826 11:40:30.105391  135795 command_runner.go:130] > # 	"image_pulls_by_name",
	I0826 11:40:30.105397  135795 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0826 11:40:30.105401  135795 command_runner.go:130] > # 	"image_pulls_failures",
	I0826 11:40:30.105407  135795 command_runner.go:130] > # 	"image_pulls_successes",
	I0826 11:40:30.105412  135795 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0826 11:40:30.105429  135795 command_runner.go:130] > # 	"image_layer_reuse",
	I0826 11:40:30.105434  135795 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0826 11:40:30.105438  135795 command_runner.go:130] > # 	"containers_oom_total",
	I0826 11:40:30.105442  135795 command_runner.go:130] > # 	"containers_oom",
	I0826 11:40:30.105449  135795 command_runner.go:130] > # 	"processes_defunct",
	I0826 11:40:30.105453  135795 command_runner.go:130] > # 	"operations_total",
	I0826 11:40:30.105459  135795 command_runner.go:130] > # 	"operations_latency_seconds",
	I0826 11:40:30.105463  135795 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0826 11:40:30.105467  135795 command_runner.go:130] > # 	"operations_errors_total",
	I0826 11:40:30.105471  135795 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0826 11:40:30.105476  135795 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0826 11:40:30.105482  135795 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0826 11:40:30.105486  135795 command_runner.go:130] > # 	"image_pulls_success_total",
	I0826 11:40:30.105495  135795 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0826 11:40:30.105499  135795 command_runner.go:130] > # 	"containers_oom_count_total",
	I0826 11:40:30.105504  135795 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0826 11:40:30.105509  135795 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0826 11:40:30.105514  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105519  135795 command_runner.go:130] > # The port on which the metrics server will listen.
	I0826 11:40:30.105523  135795 command_runner.go:130] > # metrics_port = 9090
	I0826 11:40:30.105528  135795 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0826 11:40:30.105534  135795 command_runner.go:130] > # metrics_socket = ""
	I0826 11:40:30.105538  135795 command_runner.go:130] > # The certificate for the secure metrics server.
	I0826 11:40:30.105546  135795 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0826 11:40:30.105552  135795 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0826 11:40:30.105559  135795 command_runner.go:130] > # certificate on any modification event.
	I0826 11:40:30.105563  135795 command_runner.go:130] > # metrics_cert = ""
	I0826 11:40:30.105570  135795 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0826 11:40:30.105575  135795 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0826 11:40:30.105581  135795 command_runner.go:130] > # metrics_key = ""
	I0826 11:40:30.105587  135795 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0826 11:40:30.105593  135795 command_runner.go:130] > [crio.tracing]
	I0826 11:40:30.105599  135795 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0826 11:40:30.105609  135795 command_runner.go:130] > # enable_tracing = false
	I0826 11:40:30.105614  135795 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0826 11:40:30.105621  135795 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0826 11:40:30.105629  135795 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0826 11:40:30.105635  135795 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0826 11:40:30.105640  135795 command_runner.go:130] > # CRI-O NRI configuration.
	I0826 11:40:30.105646  135795 command_runner.go:130] > [crio.nri]
	I0826 11:40:30.105651  135795 command_runner.go:130] > # Globally enable or disable NRI.
	I0826 11:40:30.105657  135795 command_runner.go:130] > # enable_nri = false
	I0826 11:40:30.105662  135795 command_runner.go:130] > # NRI socket to listen on.
	I0826 11:40:30.105667  135795 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0826 11:40:30.105673  135795 command_runner.go:130] > # NRI plugin directory to use.
	I0826 11:40:30.105678  135795 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0826 11:40:30.105682  135795 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0826 11:40:30.105689  135795 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0826 11:40:30.105694  135795 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0826 11:40:30.105699  135795 command_runner.go:130] > # nri_disable_connections = false
	I0826 11:40:30.105704  135795 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0826 11:40:30.105711  135795 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0826 11:40:30.105716  135795 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0826 11:40:30.105722  135795 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0826 11:40:30.105728  135795 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0826 11:40:30.105734  135795 command_runner.go:130] > [crio.stats]
	I0826 11:40:30.105741  135795 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0826 11:40:30.105751  135795 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0826 11:40:30.105758  135795 command_runner.go:130] > # stats_collection_period = 0
	I0826 11:40:30.105889  135795 cni.go:84] Creating CNI manager for ""
	I0826 11:40:30.105901  135795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0826 11:40:30.105910  135795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:40:30.105933  135795 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-523807 NodeName:multinode-523807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:40:30.106058  135795 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-523807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 11:40:30.106125  135795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:40:30.116772  135795 command_runner.go:130] > kubeadm
	I0826 11:40:30.116794  135795 command_runner.go:130] > kubectl
	I0826 11:40:30.116798  135795 command_runner.go:130] > kubelet
	I0826 11:40:30.116819  135795 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:40:30.116881  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 11:40:30.126362  135795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0826 11:40:30.143009  135795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:40:30.159667  135795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0826 11:40:30.175805  135795 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0826 11:40:30.179928  135795 command_runner.go:130] > 192.168.39.26	control-plane.minikube.internal
	I0826 11:40:30.180018  135795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:40:30.327586  135795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:40:30.346272  135795 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807 for IP: 192.168.39.26
	I0826 11:40:30.346295  135795 certs.go:194] generating shared ca certs ...
	I0826 11:40:30.346313  135795 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:40:30.346453  135795 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:40:30.346489  135795 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:40:30.346498  135795 certs.go:256] generating profile certs ...
	I0826 11:40:30.346572  135795 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/client.key
	I0826 11:40:30.346656  135795 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key.c759d2c4
	I0826 11:40:30.346691  135795 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key
	I0826 11:40:30.346702  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:40:30.346716  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:40:30.346728  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:40:30.346741  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:40:30.346753  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:40:30.346767  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:40:30.346779  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:40:30.346793  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:40:30.346897  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:40:30.346935  135795 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:40:30.346945  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:40:30.346970  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:40:30.346998  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:40:30.347019  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:40:30.347057  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:40:30.347087  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.347100  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.347112  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.347787  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:40:30.377058  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:40:30.407237  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:40:30.433177  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:40:30.459426  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 11:40:30.484682  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:40:30.509102  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:40:30.533895  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:40:30.558862  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:40:30.583727  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:40:30.608720  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:40:30.633010  135795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:40:30.650998  135795 ssh_runner.go:195] Run: openssl version
	I0826 11:40:30.657215  135795 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0826 11:40:30.657323  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:40:30.668879  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673723  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673764  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673818  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.679946  135795 command_runner.go:130] > b5213941
	I0826 11:40:30.680119  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:40:30.690002  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:40:30.701661  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.706974  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.707016  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.707078  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.713003  135795 command_runner.go:130] > 51391683
	I0826 11:40:30.713107  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:40:30.723045  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:40:30.734778  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739623  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739660  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739707  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.745857  135795 command_runner.go:130] > 3ec20f2e
	I0826 11:40:30.745944  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:40:30.756365  135795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:40:30.761404  135795 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:40:30.761442  135795 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0826 11:40:30.761451  135795 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0826 11:40:30.761460  135795 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0826 11:40:30.761470  135795 command_runner.go:130] > Access: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761478  135795 command_runner.go:130] > Modify: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761485  135795 command_runner.go:130] > Change: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761494  135795 command_runner.go:130] >  Birth: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761584  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 11:40:30.767774  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.767888  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 11:40:30.773988  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.774113  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 11:40:30.780128  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.780234  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 11:40:30.786305  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.786421  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 11:40:30.792629  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.792719  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 11:40:30.798872  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.798965  135795 kubeadm.go:392] StartCluster: {Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:40:30.799083  135795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:40:30.799136  135795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:40:30.832866  135795 command_runner.go:130] > c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4
	I0826 11:40:30.832895  135795 command_runner.go:130] > 5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51
	I0826 11:40:30.832903  135795 command_runner.go:130] > de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af
	I0826 11:40:30.832912  135795 command_runner.go:130] > 0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1
	I0826 11:40:30.832920  135795 command_runner.go:130] > 37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5
	I0826 11:40:30.832928  135795 command_runner.go:130] > 33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599
	I0826 11:40:30.832935  135795 command_runner.go:130] > 50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d
	I0826 11:40:30.832943  135795 command_runner.go:130] > 076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470
	I0826 11:40:30.834307  135795 cri.go:89] found id: "c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4"
	I0826 11:40:30.834330  135795 cri.go:89] found id: "5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51"
	I0826 11:40:30.834337  135795 cri.go:89] found id: "de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af"
	I0826 11:40:30.834341  135795 cri.go:89] found id: "0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1"
	I0826 11:40:30.834345  135795 cri.go:89] found id: "37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5"
	I0826 11:40:30.834349  135795 cri.go:89] found id: "33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599"
	I0826 11:40:30.834353  135795 cri.go:89] found id: "50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d"
	I0826 11:40:30.834357  135795 cri.go:89] found id: "076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470"
	I0826 11:40:30.834361  135795 cri.go:89] found id: ""
	I0826 11:40:30.834433  135795 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.913593818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672538913563035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2aaec23f-6504-412b-80b5-50b0ff13e81e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.914660894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7121633b-62d3-4d42-9ba6-9e00ca7013b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.914737981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7121633b-62d3-4d42-9ba6-9e00ca7013b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.915478647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3e29efaf44685db84a8043a827c6f265f8d2d117a70f828b95ee630f332823,PodSandboxId:f69ccf7999b51dfbb2eaf78218b6b8592a6d168bcfc5a83fef835c690927feaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724672471076445349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,PodSandboxId:a67d34ee793ce1f666652aa6beedf631d4b0f835e53adbc4beb528ee9d519e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724672437494999552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,PodSandboxId:0239eb4c2a55e0049b12686280780cb11144c8005ed848c54558d420173c0c64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724672437437031346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,PodSandboxId:b307a467a746d5beefc783eb0551e651831f02ec631188a86f3afe14064f88e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724672437376668721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,PodSandboxId:9782bb8928fec139d1e8d1b075f49de99ea1139444b115f4542b7ac992f69cbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724672437327860990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,PodSandboxId:155a6fa8ec21d7a4b8af3a50f6010767700e3334fd03b465ee2f08b00ee6a5c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724672433530301499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,PodSandboxId:7468960e03094d2be0a8b28ba7f740757d1a1dfc5a2eb2a5a41dec3aa37aa33b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724672433528258488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,PodSandboxId:03c91b9190e7cfd3823771dc25e44280c46f73443c52dc86a6f9e1e72ee69399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724672433434765265,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,PodSandboxId:38902244e0365e6722d9c5929255741afa58fe57489a60d02317dbf89b96b356,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724672433396555658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e520370eee714aa7a518d55733315dcd9f005c58b0a4dab2ef0ddb0267744,PodSandboxId:1837276db9d34118447c719b5cc4e1e149a94fad8d345c6892b4b57140625b04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724672106886046571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4,PodSandboxId:84d0e1515e4f69de62085bdc61cd4ddb01b1c963f9138b977a8c9e483a133a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724672050187163123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51,PodSandboxId:a30d77461961207849ea0559673ad52d86e2ad731b4b38b89e5414601db1d5d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724672050128935769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af,PodSandboxId:4ff94d658cc3c5c3604896bb63581d49246f218118b98adf0951b56caa05efcb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724672038604898746,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1,PodSandboxId:934780c4abd0f40d86545ba3d361af864881111d3af43dbbc1463145386cd5f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724672034831064703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5,PodSandboxId:814e589a28c700b14d2917a760b41b2f90df114e47124f7476133f75236639ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724672024247378652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599,PodSandboxId:36d88d25591a2fbdac92f4897801691d8911eff6b6529d1312738b47dd6c0ba6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724672024180375602,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d,PodSandboxId:5d9702f9956d3c081ada07b457339fbe61a909672bf059370f558ee422ab739a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724672024161227902,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470,PodSandboxId:11e17415ffccfac6181a146766cdc24bd364f0e69cf0fcc1b04d5d7233f4bb65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724672024101428873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7121633b-62d3-4d42-9ba6-9e00ca7013b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.960966423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e89075b-129b-4406-8a34-a6cc7d07dfb5 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.961067313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e89075b-129b-4406-8a34-a6cc7d07dfb5 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.962539567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cd455c1-6c0c-4b7c-b1e0-09db91c0c669 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.963156645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672538963064418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cd455c1-6c0c-4b7c-b1e0-09db91c0c669 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.963831884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fb97f32-1539-4141-bb81-ddbba25e3d15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.964008770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fb97f32-1539-4141-bb81-ddbba25e3d15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:18 multinode-523807 crio[2752]: time="2024-08-26 11:42:18.964978123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3e29efaf44685db84a8043a827c6f265f8d2d117a70f828b95ee630f332823,PodSandboxId:f69ccf7999b51dfbb2eaf78218b6b8592a6d168bcfc5a83fef835c690927feaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724672471076445349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,PodSandboxId:a67d34ee793ce1f666652aa6beedf631d4b0f835e53adbc4beb528ee9d519e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724672437494999552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,PodSandboxId:0239eb4c2a55e0049b12686280780cb11144c8005ed848c54558d420173c0c64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724672437437031346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,PodSandboxId:b307a467a746d5beefc783eb0551e651831f02ec631188a86f3afe14064f88e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724672437376668721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,PodSandboxId:9782bb8928fec139d1e8d1b075f49de99ea1139444b115f4542b7ac992f69cbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724672437327860990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,PodSandboxId:155a6fa8ec21d7a4b8af3a50f6010767700e3334fd03b465ee2f08b00ee6a5c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724672433530301499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,PodSandboxId:7468960e03094d2be0a8b28ba7f740757d1a1dfc5a2eb2a5a41dec3aa37aa33b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724672433528258488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,PodSandboxId:03c91b9190e7cfd3823771dc25e44280c46f73443c52dc86a6f9e1e72ee69399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724672433434765265,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,PodSandboxId:38902244e0365e6722d9c5929255741afa58fe57489a60d02317dbf89b96b356,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724672433396555658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e520370eee714aa7a518d55733315dcd9f005c58b0a4dab2ef0ddb0267744,PodSandboxId:1837276db9d34118447c719b5cc4e1e149a94fad8d345c6892b4b57140625b04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724672106886046571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4,PodSandboxId:84d0e1515e4f69de62085bdc61cd4ddb01b1c963f9138b977a8c9e483a133a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724672050187163123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51,PodSandboxId:a30d77461961207849ea0559673ad52d86e2ad731b4b38b89e5414601db1d5d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724672050128935769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af,PodSandboxId:4ff94d658cc3c5c3604896bb63581d49246f218118b98adf0951b56caa05efcb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724672038604898746,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1,PodSandboxId:934780c4abd0f40d86545ba3d361af864881111d3af43dbbc1463145386cd5f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724672034831064703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5,PodSandboxId:814e589a28c700b14d2917a760b41b2f90df114e47124f7476133f75236639ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724672024247378652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599,PodSandboxId:36d88d25591a2fbdac92f4897801691d8911eff6b6529d1312738b47dd6c0ba6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724672024180375602,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d,PodSandboxId:5d9702f9956d3c081ada07b457339fbe61a909672bf059370f558ee422ab739a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724672024161227902,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470,PodSandboxId:11e17415ffccfac6181a146766cdc24bd364f0e69cf0fcc1b04d5d7233f4bb65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724672024101428873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fb97f32-1539-4141-bb81-ddbba25e3d15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.011679048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37537226-06a4-4e90-8c31-d51c9684f43b name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.011773790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37537226-06a4-4e90-8c31-d51c9684f43b name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.013038070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67c0e094-b531-4773-919c-a913f94d67b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.013499600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672539013473939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67c0e094-b531-4773-919c-a913f94d67b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.014073691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13335556-96a9-4efc-aeb9-336b112d860e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.014192186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13335556-96a9-4efc-aeb9-336b112d860e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.014513180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3e29efaf44685db84a8043a827c6f265f8d2d117a70f828b95ee630f332823,PodSandboxId:f69ccf7999b51dfbb2eaf78218b6b8592a6d168bcfc5a83fef835c690927feaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724672471076445349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,PodSandboxId:a67d34ee793ce1f666652aa6beedf631d4b0f835e53adbc4beb528ee9d519e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724672437494999552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,PodSandboxId:0239eb4c2a55e0049b12686280780cb11144c8005ed848c54558d420173c0c64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724672437437031346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,PodSandboxId:b307a467a746d5beefc783eb0551e651831f02ec631188a86f3afe14064f88e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724672437376668721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,PodSandboxId:9782bb8928fec139d1e8d1b075f49de99ea1139444b115f4542b7ac992f69cbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724672437327860990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,PodSandboxId:155a6fa8ec21d7a4b8af3a50f6010767700e3334fd03b465ee2f08b00ee6a5c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724672433530301499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,PodSandboxId:7468960e03094d2be0a8b28ba7f740757d1a1dfc5a2eb2a5a41dec3aa37aa33b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724672433528258488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,PodSandboxId:03c91b9190e7cfd3823771dc25e44280c46f73443c52dc86a6f9e1e72ee69399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724672433434765265,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,PodSandboxId:38902244e0365e6722d9c5929255741afa58fe57489a60d02317dbf89b96b356,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724672433396555658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e520370eee714aa7a518d55733315dcd9f005c58b0a4dab2ef0ddb0267744,PodSandboxId:1837276db9d34118447c719b5cc4e1e149a94fad8d345c6892b4b57140625b04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724672106886046571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4,PodSandboxId:84d0e1515e4f69de62085bdc61cd4ddb01b1c963f9138b977a8c9e483a133a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724672050187163123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51,PodSandboxId:a30d77461961207849ea0559673ad52d86e2ad731b4b38b89e5414601db1d5d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724672050128935769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af,PodSandboxId:4ff94d658cc3c5c3604896bb63581d49246f218118b98adf0951b56caa05efcb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724672038604898746,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1,PodSandboxId:934780c4abd0f40d86545ba3d361af864881111d3af43dbbc1463145386cd5f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724672034831064703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5,PodSandboxId:814e589a28c700b14d2917a760b41b2f90df114e47124f7476133f75236639ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724672024247378652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599,PodSandboxId:36d88d25591a2fbdac92f4897801691d8911eff6b6529d1312738b47dd6c0ba6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724672024180375602,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d,PodSandboxId:5d9702f9956d3c081ada07b457339fbe61a909672bf059370f558ee422ab739a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724672024161227902,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470,PodSandboxId:11e17415ffccfac6181a146766cdc24bd364f0e69cf0fcc1b04d5d7233f4bb65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724672024101428873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13335556-96a9-4efc-aeb9-336b112d860e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.059840207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1ed756b-c3a1-43e0-9728-6f38793f05f2 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.059921187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1ed756b-c3a1-43e0-9728-6f38793f05f2 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.061548870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=355c6143-eb5a-495f-b52e-da12b96c393c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.062599170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672539062573196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=355c6143-eb5a-495f-b52e-da12b96c393c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.063371959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb82ee8a-694b-4e27-a8d3-09f0c7c67d28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.063471512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb82ee8a-694b-4e27-a8d3-09f0c7c67d28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:42:19 multinode-523807 crio[2752]: time="2024-08-26 11:42:19.063891645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3e29efaf44685db84a8043a827c6f265f8d2d117a70f828b95ee630f332823,PodSandboxId:f69ccf7999b51dfbb2eaf78218b6b8592a6d168bcfc5a83fef835c690927feaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724672471076445349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,PodSandboxId:a67d34ee793ce1f666652aa6beedf631d4b0f835e53adbc4beb528ee9d519e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724672437494999552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,PodSandboxId:0239eb4c2a55e0049b12686280780cb11144c8005ed848c54558d420173c0c64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724672437437031346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,PodSandboxId:b307a467a746d5beefc783eb0551e651831f02ec631188a86f3afe14064f88e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724672437376668721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,PodSandboxId:9782bb8928fec139d1e8d1b075f49de99ea1139444b115f4542b7ac992f69cbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724672437327860990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,PodSandboxId:155a6fa8ec21d7a4b8af3a50f6010767700e3334fd03b465ee2f08b00ee6a5c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724672433530301499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,PodSandboxId:7468960e03094d2be0a8b28ba7f740757d1a1dfc5a2eb2a5a41dec3aa37aa33b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724672433528258488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,PodSandboxId:03c91b9190e7cfd3823771dc25e44280c46f73443c52dc86a6f9e1e72ee69399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724672433434765265,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,PodSandboxId:38902244e0365e6722d9c5929255741afa58fe57489a60d02317dbf89b96b356,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724672433396555658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e520370eee714aa7a518d55733315dcd9f005c58b0a4dab2ef0ddb0267744,PodSandboxId:1837276db9d34118447c719b5cc4e1e149a94fad8d345c6892b4b57140625b04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724672106886046571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4,PodSandboxId:84d0e1515e4f69de62085bdc61cd4ddb01b1c963f9138b977a8c9e483a133a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724672050187163123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51,PodSandboxId:a30d77461961207849ea0559673ad52d86e2ad731b4b38b89e5414601db1d5d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724672050128935769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af,PodSandboxId:4ff94d658cc3c5c3604896bb63581d49246f218118b98adf0951b56caa05efcb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724672038604898746,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1,PodSandboxId:934780c4abd0f40d86545ba3d361af864881111d3af43dbbc1463145386cd5f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724672034831064703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5,PodSandboxId:814e589a28c700b14d2917a760b41b2f90df114e47124f7476133f75236639ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724672024247378652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599,PodSandboxId:36d88d25591a2fbdac92f4897801691d8911eff6b6529d1312738b47dd6c0ba6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724672024180375602,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d,PodSandboxId:5d9702f9956d3c081ada07b457339fbe61a909672bf059370f558ee422ab739a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724672024161227902,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470,PodSandboxId:11e17415ffccfac6181a146766cdc24bd364f0e69cf0fcc1b04d5d7233f4bb65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724672024101428873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb82ee8a-694b-4e27-a8d3-09f0c7c67d28 name=/runtime.v1.RuntimeService/ListContainers
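The entries above are debug-level CRI-O records of /runtime.v1.RuntimeService/ListContainers and related CRI calls captured on multinode-523807 around the time of the failure. Comparable output can usually be tailed directly on the node; the command below is an illustrative sketch only (the profile name is taken from this report, while the ssh/journalctl invocation is assumed for a stock minikube VM running crio under systemd and is not part of the recorded test run):

	out/minikube-linux-amd64 -p multinode-523807 ssh "sudo journalctl -u crio --no-pager | tail -n 50"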
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fb3e29efaf446       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   f69ccf7999b51       busybox-7dff88458-9mhm9
	5b1205006e366       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   a67d34ee793ce       kindnet-4s28f
	bf90c23162ca9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   0239eb4c2a55e       coredns-6f6b679f8f-h6q94
	38e6e68fd8c2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b307a467a746d       storage-provisioner
	f42d1d54cf96f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   9782bb8928fec       kube-proxy-9ppdx
	b40b91469f6f2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   155a6fa8ec21d       kube-scheduler-multinode-523807
	7cca73dc65776       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   7468960e03094       etcd-multinode-523807
	8562aeb5a0efc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   03c91b9190e7c       kube-controller-manager-multinode-523807
	9a1e7fd44c56a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   38902244e0365       kube-apiserver-multinode-523807
	174e520370eee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   1837276db9d34       busybox-7dff88458-9mhm9
	c8e515f3a0923       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   84d0e1515e4f6       coredns-6f6b679f8f-h6q94
	5337003675ddb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   a30d774619612       storage-provisioner
	de944421bc4b9       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   4ff94d658cc3c       kindnet-4s28f
	0e1d877a87d25       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   934780c4abd0f       kube-proxy-9ppdx
	37dbc154c98a1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   814e589a28c70       kube-scheduler-multinode-523807
	33470455c3b47       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   36d88d25591a2       etcd-multinode-523807
	50ee5bf6f5578       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   5d9702f9956d3       kube-apiserver-multinode-523807
	076c6b1d077f6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   11e17415ffccf       kube-controller-manager-multinode-523807
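The listing above shows every container on the node, both the restarted (attempt 1, Running) and the original (attempt 0, Exited) instances. If needed, a listing in this form can typically be regenerated for the same profile with the sketch below (profile name taken from this report; the exact ssh/crictl invocation is assumed rather than copied from the test):

	out/minikube-linux-amd64 -p multinode-523807 ssh "sudo crictl ps -a"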
	
	
	==> coredns [bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44340 - 29004 "HINFO IN 4779146656012115169.2652164064989986983. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012425361s
	
	
	==> coredns [c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4] <==
	[INFO] 10.244.1.2:39829 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939099s
	[INFO] 10.244.1.2:52993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088667s
	[INFO] 10.244.1.2:36666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105643s
	[INFO] 10.244.1.2:55897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013678s
	[INFO] 10.244.1.2:37900 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060202s
	[INFO] 10.244.1.2:46665 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087041s
	[INFO] 10.244.1.2:48703 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061894s
	[INFO] 10.244.0.3:33209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117078s
	[INFO] 10.244.0.3:46758 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056477s
	[INFO] 10.244.0.3:39695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050685s
	[INFO] 10.244.0.3:53216 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091954s
	[INFO] 10.244.1.2:42981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157631s
	[INFO] 10.244.1.2:55066 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158605s
	[INFO] 10.244.1.2:34567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011432s
	[INFO] 10.244.1.2:50043 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007847s
	[INFO] 10.244.0.3:53946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100062s
	[INFO] 10.244.0.3:48632 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105501s
	[INFO] 10.244.0.3:43563 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077443s
	[INFO] 10.244.0.3:39482 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008123s
	[INFO] 10.244.1.2:38022 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144892s
	[INFO] 10.244.1.2:45065 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130826s
	[INFO] 10.244.1.2:60856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092068s
	[INFO] 10.244.1.2:53154 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081147s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
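The two coredns blocks above belong to the restarted container (bf90c23162ca9, attempt 1) and the exited one (c8e515f3a0923, attempt 0); the SIGTERM and lameduck lines mark the shutdown of the original instance when the node was restarted. Logs of this kind can usually be pulled with kubectl; the commands below are a sketch using the context and pod name visible in this report, not the tool the harness actually ran:

	kubectl --context multinode-523807 -n kube-system logs coredns-6f6b679f8f-h6q94
	kubectl --context multinode-523807 -n kube-system logs coredns-6f6b679f8f-h6q94 --previous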
	
	
	==> describe nodes <==
	Name:               multinode-523807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-523807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=multinode-523807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_33_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:33:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-523807
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:34:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-523807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3aabec31f054fd2915bbab4bb374ee9
	  System UUID:                c3aabec3-1f05-4fd2-915b-bab4bb374ee9
	  Boot ID:                    a941a4b1-20f0-4947-ba1e-78491d4e2453
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9mhm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-6f6b679f8f-h6q94                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m25s
	  kube-system                 etcd-multinode-523807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-4s28f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-523807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-523807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-9ppdx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-523807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m23s                  kube-proxy       
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m36s (x8 over 8m36s)  kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s (x8 over 8m36s)  kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s (x7 over 8m36s)  kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s                  kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m30s                  kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s                  kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m26s                  node-controller  Node multinode-523807 event: Registered Node multinode-523807 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-523807 status is now: NodeReady
	  Normal  Starting                 107s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)    kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)    kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)    kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                   node-controller  Node multinode-523807 event: Registered Node multinode-523807 in Controller
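That completes the description of the control-plane node; the next two blocks describe the worker nodes m02 and m03 in the same format. Output of this shape is what kubectl prints for a node, so a rough way to regenerate it for this profile would be the sketch below (node and context names taken from the header above; the invocation itself is assumed):

	kubectl --context multinode-523807 describe node multinode-523807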
	
	
	Name:               multinode-523807-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-523807-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=multinode-523807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-523807-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:42:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    multinode-523807-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 46bde34fa11942fb905e01011870dca1
	  System UUID:                46bde34f-a119-42fb-905e-01011870dca1
	  Boot ID:                    b65364d4-f9c3-433f-bd49-b7eff1dd8e80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwpns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-48gc2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-proxy-4v7w6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m33s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m38s (x2 over 7m39s)  kubelet     Node multinode-523807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x2 over 7m39s)  kubelet     Node multinode-523807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x2 over 7m39s)  kubelet     Node multinode-523807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-523807-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-523807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-523807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-523807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-523807-m02 status is now: NodeReady
	
	
	Name:               multinode-523807-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-523807-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=multinode-523807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_41_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:41:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-523807-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:42:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:42:16 +0000   Mon, 26 Aug 2024 11:41:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:42:16 +0000   Mon, 26 Aug 2024 11:41:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:42:16 +0000   Mon, 26 Aug 2024 11:41:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:42:16 +0000   Mon, 26 Aug 2024 11:42:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    multinode-523807-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aac0556aaa5491b8abcb0aac94b278d
	  System UUID:                7aac0556-aaa5-491b-8abc-b0aac94b278d
	  Boot ID:                    bea52098-70a5-4b87-b60d-a94a6911f54b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8hw78       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-7tjtx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m45s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet     Node multinode-523807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-523807-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-523807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-523807-m03 status is now: NodeReady
	  Normal  Starting                 23s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-523807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-523807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-523807-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060319] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.172350] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.140476] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.290235] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.984982] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.729031] systemd-fstab-generator[886]: Ignoring "noauto" option for root device
	[  +0.054813] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.482907] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.077944] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.206246] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.120884] kauditd_printk_skb: 18 callbacks suppressed
	[Aug26 11:34] kauditd_printk_skb: 69 callbacks suppressed
	[Aug26 11:35] kauditd_printk_skb: 14 callbacks suppressed
	[Aug26 11:40] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.146686] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.175042] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.155608] systemd-fstab-generator[2708]: Ignoring "noauto" option for root device
	[  +0.283634] systemd-fstab-generator[2736]: Ignoring "noauto" option for root device
	[  +3.476818] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +2.321577] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +0.082546] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.000514] kauditd_printk_skb: 82 callbacks suppressed
	[  +9.496596] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.115004] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[Aug26 11:41] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599] <==
	{"level":"info","ts":"2024-08-26T11:33:44.778545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T11:33:44.774234Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779271Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779897Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:33:44.780609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-26T11:34:41.131148Z","caller":"traceutil/trace.go:171","msg":"trace[89642543] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"146.30578ms","start":"2024-08-26T11:34:40.984816Z","end":"2024-08-26T11:34:41.131121Z","steps":["trace[89642543] 'process raft request'  (duration: 143.9325ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:35:39.675752Z","caller":"traceutil/trace.go:171","msg":"trace[1771133621] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"243.834037ms","start":"2024-08-26T11:35:39.431894Z","end":"2024-08-26T11:35:39.675728Z","steps":["trace[1771133621] 'process raft request'  (duration: 174.174622ms)","trace[1771133621] 'compare'  (duration: 69.542075ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T11:35:39.676032Z","caller":"traceutil/trace.go:171","msg":"trace[562194986] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"219.407946ms","start":"2024-08-26T11:35:39.456611Z","end":"2024-08-26T11:35:39.676019Z","steps":["trace[562194986] 'read index received'  (duration: 149.445691ms)","trace[562194986] 'applied index is now lower than readState.Index'  (duration: 69.960986ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T11:35:39.676333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.633916ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T11:35:39.677544Z","caller":"traceutil/trace.go:171","msg":"trace[1758875829] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:617; }","duration":"220.921411ms","start":"2024-08-26T11:35:39.456605Z","end":"2024-08-26T11:35:39.677526Z","steps":["trace[1758875829] 'agreement among raft nodes before linearized reading'  (duration: 219.61439ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:35:43.674534Z","caller":"traceutil/trace.go:171","msg":"trace[2052732496] linearizableReadLoop","detail":"{readStateIndex:683; appliedIndex:682; }","duration":"218.136437ms","start":"2024-08-26T11:35:43.456366Z","end":"2024-08-26T11:35:43.674503Z","steps":["trace[2052732496] 'read index received'  (duration: 216.688147ms)","trace[2052732496] 'applied index is now lower than readState.Index'  (duration: 1.447745ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T11:35:43.674672Z","caller":"traceutil/trace.go:171","msg":"trace[635935448] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"258.846467ms","start":"2024-08-26T11:35:43.415811Z","end":"2024-08-26T11:35:43.674657Z","steps":["trace[635935448] 'process raft request'  (duration: 257.342594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T11:35:43.674734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.354641ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T11:35:43.676052Z","caller":"traceutil/trace.go:171","msg":"trace[58905992] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:648; }","duration":"219.674668ms","start":"2024-08-26T11:35:43.456360Z","end":"2024-08-26T11:35:43.676035Z","steps":["trace[58905992] 'agreement among raft nodes before linearized reading'  (duration: 218.343193ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:38:54.734190Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-26T11:38:54.734313Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-523807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-08-26T11:38:54.734453Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.734580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.822722Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.822795Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-26T11:38:54.822864Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-08-26T11:38:54.825708Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:38:54.825939Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:38:54.825986Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-523807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> etcd [7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb] <==
	{"level":"info","ts":"2024-08-26T11:40:33.906656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d switched to configuration voters=(14521430496220066701)"}
	{"level":"info","ts":"2024-08-26T11:40:33.910330Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","added-peer-id":"c9867c1935b8b38d","added-peer-peer-urls":["https://192.168.39.26:2380"]}
	{"level":"info","ts":"2024-08-26T11:40:33.910461Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:40:33.910521Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:40:33.921488Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T11:40:33.921793Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c9867c1935b8b38d","initial-advertise-peer-urls":["https://192.168.39.26:2380"],"listen-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.26:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T11:40:33.921836Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T11:40:33.921981Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:40:33.922004Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:40:34.963554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.969167Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:multinode-523807 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T11:40:34.969323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:40:34.970576Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:40:34.971971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-26T11:40:34.972640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:40:34.972760Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T11:40:34.972785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T11:40:34.973573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:40:34.974428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:42:19 up 9 min,  0 users,  load average: 0.19, 0.21, 0.11
	Linux multinode-523807 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3] <==
	I0826 11:41:38.442741       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:41:48.447351       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:41:48.447502       1 main.go:299] handling current node
	I0826 11:41:48.447530       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:41:48.447548       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:41:48.447722       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:41:48.447747       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:41:58.442015       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:41:58.442252       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:41:58.442458       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:41:58.442512       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.2.0/24] 
	I0826 11:41:58.442632       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:41:58.442677       1 main.go:299] handling current node
	I0826 11:42:08.442143       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:42:08.442292       1 main.go:299] handling current node
	I0826 11:42:08.442334       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:42:08.442366       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:42:08.442565       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:42:08.442610       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.2.0/24] 
	I0826 11:42:18.443281       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:42:18.443330       1 main.go:299] handling current node
	I0826 11:42:18.443354       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:42:18.443362       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:42:18.443533       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:42:18.443561       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af] <==
	I0826 11:38:09.532518       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:19.536305       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:19.536340       1 main.go:299] handling current node
	I0826 11:38:19.536361       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:19.536368       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:19.536525       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:19.536533       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:29.540749       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:29.540795       1 main.go:299] handling current node
	I0826 11:38:29.540809       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:29.540815       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:29.540964       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:29.540983       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:39.539499       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:39.539716       1 main.go:299] handling current node
	I0826 11:38:39.539753       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:39.539774       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:39.539930       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:39.539952       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:49.532332       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:49.532407       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:49.532603       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:49.532613       1 main.go:299] handling current node
	I0826 11:38:49.532641       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:49.532646       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d] <==
	I0826 11:38:54.744568       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0826 11:38:54.744603       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0826 11:38:54.744624       1 controller.go:132] Ending legacy_token_tracking_controller
	I0826 11:38:54.744630       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0826 11:38:54.744657       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0826 11:38:54.744696       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0826 11:38:54.744712       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	E0826 11:38:54.754373       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.754872       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0826 11:38:54.758734       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:38:54.759241       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0826 11:38:54.759525       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0826 11:38:54.759554       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0826 11:38:54.759579       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0826 11:38:54.759606       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:38:54.762613       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0826 11:38:54.762671       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0826 11:38:54.763012       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0826 11:38:54.763233       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0826 11:38:54.763901       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0826 11:38:54.768410       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.768819       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.768939       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.769026       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.769306       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f] <==
	I0826 11:40:36.315954       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 11:40:36.316891       1 aggregator.go:171] initial CRD sync complete...
	I0826 11:40:36.316932       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 11:40:36.316940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 11:40:36.320804       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 11:40:36.320871       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 11:40:36.361148       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 11:40:36.383464       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:40:36.383554       1 policy_source.go:224] refreshing policies
	I0826 11:40:36.389047       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 11:40:36.389174       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 11:40:36.389208       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0826 11:40:36.389263       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:40:36.394590       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0826 11:40:36.398067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0826 11:40:36.420677       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 11:40:36.420911       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:40:37.206902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 11:40:38.771132       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 11:40:38.909023       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 11:40:38.927540       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 11:40:39.038294       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 11:40:39.050699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 11:40:39.767324       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0826 11:40:39.917045       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470] <==
	I0826 11:36:28.341752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:28.341847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:29.469730       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-523807-m03\" does not exist"
	I0826 11:36:29.469838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:29.484436       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-523807-m03" podCIDRs=["10.244.3.0/24"]
	I0826 11:36:29.484472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.487183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.494758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.882689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:30.217376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:33.405854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:39.805785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:49.174578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:49.174984       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:49.187801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:53.407555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:28.427283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m03"
	I0826 11:37:28.427595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:28.446304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:28.480404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.156741ms"
	I0826 11:37:28.480579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.108µs"
	I0826 11:37:33.481498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:33.492438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:33.497777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:43.569931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	
	
	==> kube-controller-manager [8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c] <==
	I0826 11:41:37.650254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:41:37.667598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:41:37.677635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.262µs"
	I0826 11:41:37.693295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.938µs"
	I0826 11:41:39.775627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:41:41.400326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.71269ms"
	I0826 11:41:41.400958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.461µs"
	I0826 11:41:48.913604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:41:55.576819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:55.599448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:55.827763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:55.827827       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:41:56.816953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:41:56.817385       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-523807-m03\" does not exist"
	I0826 11:41:56.832417       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-523807-m03" podCIDRs=["10.244.2.0/24"]
	I0826 11:41:56.832534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:56.834511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:56.837229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:57.315924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:57.658844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:59.877721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:07.118777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:16.126869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:16.127289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:42:16.137619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	
	
	==> kube-proxy [0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:33:55.287964       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:33:55.321564       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	E0826 11:33:55.321630       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:33:55.395907       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:33:55.395944       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:33:55.395971       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:33:55.407040       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:33:55.407598       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:33:55.407705       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:33:55.411488       1 config.go:197] "Starting service config controller"
	I0826 11:33:55.411533       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:33:55.411566       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:33:55.411570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:33:55.412171       1 config.go:326] "Starting node config controller"
	I0826 11:33:55.412193       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:33:55.512570       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:33:55.512639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:33:55.512878       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:40:37.756250       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:40:37.784900       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	E0826 11:40:37.785007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:40:37.828347       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:40:37.828409       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:40:37.828438       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:40:37.830839       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:40:37.831259       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:40:37.831287       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:40:37.832710       1 config.go:197] "Starting service config controller"
	I0826 11:40:37.832746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:40:37.832764       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:40:37.832768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:40:37.833448       1 config.go:326] "Starting node config controller"
	I0826 11:40:37.833471       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:40:37.934121       1 shared_informer.go:320] Caches are synced for node config
	I0826 11:40:37.934160       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:40:37.934169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5] <==
	E0826 11:33:46.647198       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 11:33:46.647495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:46.647527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.495704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:33:47.495823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.549435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 11:33:47.549588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.551216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0826 11:33:47.551333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.567529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:47.567775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.575934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0826 11:33:47.576058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.782226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 11:33:47.782291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.795162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:47.795374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.825537       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 11:33:47.825670       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 11:33:47.836605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 11:33:47.838260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.958554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 11:33:47.958675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 11:33:49.640136       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0826 11:38:54.747302       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a] <==
	I0826 11:40:34.626243       1 serving.go:386] Generated self-signed cert in-memory
	W0826 11:40:36.243455       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 11:40:36.243499       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 11:40:36.243558       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 11:40:36.243570       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 11:40:36.343039       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 11:40:36.343207       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:40:36.347531       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 11:40:36.348397       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 11:40:36.361988       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 11:40:36.351321       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 11:40:36.462325       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 11:40:42 multinode-523807 kubelet[2966]: E0826 11:40:42.841196    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672442840621908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:40:47 multinode-523807 kubelet[2966]: I0826 11:40:47.060759    2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 26 11:40:52 multinode-523807 kubelet[2966]: E0826 11:40:52.842855    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672452842289002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:40:52 multinode-523807 kubelet[2966]: E0826 11:40:52.842903    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672452842289002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:02 multinode-523807 kubelet[2966]: E0826 11:41:02.845657    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672462845017237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:02 multinode-523807 kubelet[2966]: E0826 11:41:02.845829    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672462845017237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:12 multinode-523807 kubelet[2966]: E0826 11:41:12.848314    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672472847843386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:12 multinode-523807 kubelet[2966]: E0826 11:41:12.848437    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672472847843386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:22 multinode-523807 kubelet[2966]: E0826 11:41:22.851166    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672482850656212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:22 multinode-523807 kubelet[2966]: E0826 11:41:22.851675    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672482850656212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:32 multinode-523807 kubelet[2966]: E0826 11:41:32.816743    2966 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:41:32 multinode-523807 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:41:32 multinode-523807 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:41:32 multinode-523807 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:41:32 multinode-523807 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:41:32 multinode-523807 kubelet[2966]: E0826 11:41:32.853460    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672492853013187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:32 multinode-523807 kubelet[2966]: E0826 11:41:32.853501    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672492853013187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:42 multinode-523807 kubelet[2966]: E0826 11:41:42.855608    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672502855206824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:42 multinode-523807 kubelet[2966]: E0826 11:41:42.856254    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672502855206824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:52 multinode-523807 kubelet[2966]: E0826 11:41:52.858779    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672512858165718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:41:52 multinode-523807 kubelet[2966]: E0826 11:41:52.859442    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672512858165718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:42:02 multinode-523807 kubelet[2966]: E0826 11:42:02.862182    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672522861619594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:42:02 multinode-523807 kubelet[2966]: E0826 11:42:02.862611    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672522861619594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:42:12 multinode-523807 kubelet[2966]: E0826 11:42:12.864437    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672532864047183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:42:12 multinode-523807 kubelet[2966]: E0826 11:42:12.864899    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672532864047183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 11:42:18.616825  136912 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-523807 -n multinode-523807
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-523807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.57s)
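The "bufio.Scanner: token too long" error in the stderr above is the stock Go failure mode when a single input line exceeds the scanner's default 64 KiB token limit; lastStart.txt contains config-dump lines far longer than that. Below is a minimal sketch of the usual workaround, assuming one reads the same file with the standard library and enlarges the scanner buffer (the 1 MiB cap is an illustrative choice, not a value taken from minikube):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raising the cap
		// lets very long log lines scan instead of failing with "token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}

With the larger buffer the same read succeeds; the choice of cap only needs to exceed the longest line in the file.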

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-523807 stop: exit status 82 (2m0.476069337s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-523807-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-523807 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status
E0826 11:44:34.329401  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-523807 status: exit status 3 (18.889879868s)

                                                
                                                
-- stdout --
	multinode-523807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-523807-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 11:44:42.159205  137549 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0826 11:44:42.159241  137549 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-523807 status" : exit status 3
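The "no route to host" errors above come from the status probe failing to open a TCP connection to the worker's SSH port after the partial stop. A minimal sketch of the same reachability check, assuming the address shown in the log (192.168.39.117:22); it reproduces this error class with nothing but the standard library:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Worker SSH endpoint taken from the status error above; adjust as needed.
		conn, err := net.DialTimeout("tcp", "192.168.39.117:22", 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // e.g. "connect: no route to host"
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}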
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-523807 -n multinode-523807
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-523807 logs -n 25: (1.456828381s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807:/home/docker/cp-test_multinode-523807-m02_multinode-523807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807 sudo cat                                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m02_multinode-523807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03:/home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807-m03 sudo cat                                   | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp testdata/cp-test.txt                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807:/home/docker/cp-test_multinode-523807-m03_multinode-523807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807 sudo cat                                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02:/home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807-m02 sudo cat                                   | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-523807 node stop m03                                                          | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	| node    | multinode-523807 node start                                                             | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| stop    | -p multinode-523807                                                                     | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| start   | -p multinode-523807                                                                     | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:38 UTC | 26 Aug 24 11:42 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC |                     |
	| node    | multinode-523807 node delete                                                            | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC | 26 Aug 24 11:42 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-523807 stop                                                                   | multinode-523807 | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:38:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:38:53.790302  135795 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:38:53.790428  135795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:38:53.790438  135795 out.go:358] Setting ErrFile to fd 2...
	I0826 11:38:53.790442  135795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:38:53.790637  135795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:38:53.791220  135795 out.go:352] Setting JSON to false
	I0826 11:38:53.792234  135795 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4875,"bootTime":1724667459,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:38:53.792299  135795 start.go:139] virtualization: kvm guest
	I0826 11:38:53.794654  135795 out.go:177] * [multinode-523807] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:38:53.797106  135795 notify.go:220] Checking for updates...
	I0826 11:38:53.797127  135795 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:38:53.798967  135795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:38:53.800673  135795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:38:53.802392  135795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:38:53.804081  135795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:38:53.805565  135795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:38:53.807438  135795 config.go:182] Loaded profile config "multinode-523807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:38:53.807564  135795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:38:53.807996  135795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:38:53.808069  135795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:38:53.823890  135795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0826 11:38:53.824378  135795 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:38:53.825044  135795 main.go:141] libmachine: Using API Version  1
	I0826 11:38:53.825068  135795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:38:53.825488  135795 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:38:53.825724  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.864584  135795 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:38:53.866187  135795 start.go:297] selected driver: kvm2
	I0826 11:38:53.866238  135795 start.go:901] validating driver "kvm2" against &{Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:38:53.866399  135795 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:38:53.866729  135795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:38:53.866803  135795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:38:53.884013  135795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:38:53.884997  135795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:38:53.885070  135795 cni.go:84] Creating CNI manager for ""
	I0826 11:38:53.885082  135795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0826 11:38:53.885158  135795 start.go:340] cluster config:
	{Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:38:53.885366  135795 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:38:53.887691  135795 out.go:177] * Starting "multinode-523807" primary control-plane node in "multinode-523807" cluster
	I0826 11:38:53.889057  135795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:38:53.889106  135795 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:38:53.889121  135795 cache.go:56] Caching tarball of preloaded images
	I0826 11:38:53.889215  135795 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:38:53.889228  135795 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:38:53.889444  135795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/config.json ...
	I0826 11:38:53.889719  135795 start.go:360] acquireMachinesLock for multinode-523807: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:38:53.889783  135795 start.go:364] duration metric: took 36.408µs to acquireMachinesLock for "multinode-523807"
	I0826 11:38:53.889801  135795 start.go:96] Skipping create...Using existing machine configuration
	I0826 11:38:53.889813  135795 fix.go:54] fixHost starting: 
	I0826 11:38:53.890210  135795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:38:53.890244  135795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:38:53.905908  135795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0826 11:38:53.906454  135795 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:38:53.906997  135795 main.go:141] libmachine: Using API Version  1
	I0826 11:38:53.907028  135795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:38:53.907527  135795 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:38:53.907780  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.908006  135795 main.go:141] libmachine: (multinode-523807) Calling .GetState
	I0826 11:38:53.909639  135795 fix.go:112] recreateIfNeeded on multinode-523807: state=Running err=<nil>
	W0826 11:38:53.909696  135795 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 11:38:53.912498  135795 out.go:177] * Updating the running kvm2 "multinode-523807" VM ...
	I0826 11:38:53.914000  135795 machine.go:93] provisionDockerMachine start ...
	I0826 11:38:53.914036  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:38:53.914397  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:53.917506  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:53.917923  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:53.917951  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:53.918171  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:53.918379  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:53.918559  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:53.918767  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:53.918990  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:53.919233  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:53.919246  135795 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 11:38:54.042247  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-523807
	
	I0826 11:38:54.042292  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.042593  135795 buildroot.go:166] provisioning hostname "multinode-523807"
	I0826 11:38:54.042625  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.042828  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.046084  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.046517  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.046554  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.046729  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.046953  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.047128  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.047266  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.047477  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.047654  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.047667  135795 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-523807 && echo "multinode-523807" | sudo tee /etc/hostname
	I0826 11:38:54.180282  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-523807
	
	I0826 11:38:54.180322  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.183283  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.183720  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.183758  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.183992  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.184186  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.184374  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.184487  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.184695  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.184861  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.184877  135795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-523807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-523807/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-523807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:38:54.296012  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:38:54.296050  135795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:38:54.296071  135795 buildroot.go:174] setting up certificates
	I0826 11:38:54.296080  135795 provision.go:84] configureAuth start
	I0826 11:38:54.296089  135795 main.go:141] libmachine: (multinode-523807) Calling .GetMachineName
	I0826 11:38:54.296412  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:38:54.299250  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.299725  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.299761  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.299918  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.302349  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.302716  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.302759  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.302910  135795 provision.go:143] copyHostCerts
	I0826 11:38:54.302951  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:38:54.302983  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:38:54.303000  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:38:54.303068  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:38:54.303149  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:38:54.303171  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:38:54.303180  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:38:54.303215  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:38:54.303292  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:38:54.303314  135795 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:38:54.303321  135795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:38:54.303348  135795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:38:54.303401  135795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.multinode-523807 san=[127.0.0.1 192.168.39.26 localhost minikube multinode-523807]
	I0826 11:38:54.439768  135795 provision.go:177] copyRemoteCerts
	I0826 11:38:54.439833  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:38:54.439874  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.443122  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.443523  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.443552  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.443803  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.444010  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.444119  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.444297  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:38:54.528856  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0826 11:38:54.528938  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:38:54.555598  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0826 11:38:54.555697  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0826 11:38:54.580215  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0826 11:38:54.580289  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:38:54.607602  135795 provision.go:87] duration metric: took 311.509229ms to configureAuth
	I0826 11:38:54.607627  135795 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:38:54.607861  135795 config.go:182] Loaded profile config "multinode-523807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:38:54.607945  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:38:54.610701  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.611205  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:38:54.611235  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:38:54.611473  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:38:54.611695  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.611952  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:38:54.612103  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:38:54.612287  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:38:54.612495  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:38:54.612516  135795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:40:25.310791  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:40:25.310846  135795 machine.go:96] duration metric: took 1m31.396808098s to provisionDockerMachine
	I0826 11:40:25.310863  135795 start.go:293] postStartSetup for "multinode-523807" (driver="kvm2")
	I0826 11:40:25.310879  135795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:40:25.310906  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.311280  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:40:25.311317  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.315043  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.315538  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.315553  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.315783  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.316088  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.316268  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.316438  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.404254  135795 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:40:25.408495  135795 command_runner.go:130] > NAME=Buildroot
	I0826 11:40:25.408520  135795 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0826 11:40:25.408525  135795 command_runner.go:130] > ID=buildroot
	I0826 11:40:25.408530  135795 command_runner.go:130] > VERSION_ID=2023.02.9
	I0826 11:40:25.408535  135795 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0826 11:40:25.408575  135795 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:40:25.408590  135795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:40:25.408672  135795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:40:25.408769  135795 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:40:25.408783  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /etc/ssl/certs/1065982.pem
	I0826 11:40:25.408906  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:40:25.421237  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:40:25.447406  135795 start.go:296] duration metric: took 136.52172ms for postStartSetup
	I0826 11:40:25.447463  135795 fix.go:56] duration metric: took 1m31.557649449s for fixHost
	I0826 11:40:25.447511  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.450761  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.451177  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.451209  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.451366  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.451585  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.451758  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.451896  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.452043  135795 main.go:141] libmachine: Using SSH client type: native
	I0826 11:40:25.452218  135795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0826 11:40:25.452230  135795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:40:25.563522  135795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724672425.542126107
	
	I0826 11:40:25.563548  135795 fix.go:216] guest clock: 1724672425.542126107
	I0826 11:40:25.563557  135795 fix.go:229] Guest: 2024-08-26 11:40:25.542126107 +0000 UTC Remote: 2024-08-26 11:40:25.447469459 +0000 UTC m=+91.697446017 (delta=94.656648ms)
	I0826 11:40:25.563585  135795 fix.go:200] guest clock delta is within tolerance: 94.656648ms
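The three fix.go lines above record minikube comparing the guest's `date +%s.%N` output against the host clock and accepting the skew. A minimal Go sketch of that comparison follows; the parsing helper and the 2-second tolerance are assumptions for illustration, not minikube's actual fix.go code.

	// Sketch only: compute a guest-clock delta from `date +%s.%N` output.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1724672425.542126107\n" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1724672425.542126107\n")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance for this sketch
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
	}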
	I0826 11:40:25.563592  135795 start.go:83] releasing machines lock for "multinode-523807", held for 1m31.673799983s
	I0826 11:40:25.563619  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.563906  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:40:25.566615  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.567034  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.567059  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.567308  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.567910  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.568110  135795 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:40:25.568194  135795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:40:25.568243  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.568328  135795 ssh_runner.go:195] Run: cat /version.json
	I0826 11:40:25.568345  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:40:25.571226  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571402  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571630  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.571654  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.571849  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.571919  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:25.571946  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:25.572056  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.572137  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:40:25.572241  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.572342  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:40:25.572418  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.572483  135795 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:40:25.572671  135795 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:40:25.687476  135795 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0826 11:40:25.688218  135795 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0826 11:40:25.688391  135795 ssh_runner.go:195] Run: systemctl --version
	I0826 11:40:25.694389  135795 command_runner.go:130] > systemd 252 (252)
	I0826 11:40:25.694450  135795 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0826 11:40:25.694532  135795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:40:25.855079  135795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0826 11:40:25.861577  135795 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0826 11:40:25.861786  135795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:40:25.861849  135795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:40:25.872002  135795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 11:40:25.872025  135795 start.go:495] detecting cgroup driver to use...
	I0826 11:40:25.872086  135795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:40:25.890111  135795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:40:25.904522  135795 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:40:25.904591  135795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:40:25.919165  135795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:40:25.933348  135795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:40:26.082174  135795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:40:26.224763  135795 docker.go:233] disabling docker service ...
	I0826 11:40:26.224855  135795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:40:26.241669  135795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:40:26.255339  135795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:40:26.401061  135795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:40:26.553130  135795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:40:26.568599  135795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:40:26.588673  135795 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0826 11:40:26.588933  135795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 11:40:26.589009  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.600000  135795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:40:26.600072  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.611025  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.622725  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.633984  135795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:40:26.646199  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.657086  135795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:40:26.668921  135795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
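The run of sed commands above rewrites the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). As a rough Go sketch of the same pause_image rewrite, using a hypothetical helper rather than minikube's code:

	// Sketch: rewrite the pause_image line of a crio drop-in, as the sed above does.
	package main

	import (
		"fmt"
		"regexp"
	)

	func setPauseImage(conf, image string) string {
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(setPauseImage(in, "registry.k8s.io/pause:3.10"))
	}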
	I0826 11:40:26.680037  135795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:40:26.689996  135795 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0826 11:40:26.690109  135795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:40:26.700063  135795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:40:26.844801  135795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:40:29.839316  135795 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.994470612s)
	I0826 11:40:29.839359  135795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:40:29.839421  135795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:40:29.844326  135795 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0826 11:40:29.844360  135795 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0826 11:40:29.844371  135795 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0826 11:40:29.844381  135795 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0826 11:40:29.844388  135795 command_runner.go:130] > Access: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844414  135795 command_runner.go:130] > Modify: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844426  135795 command_runner.go:130] > Change: 2024-08-26 11:40:29.692956473 +0000
	I0826 11:40:29.844435  135795 command_runner.go:130] >  Birth: -
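start.go logs that it will wait up to 60s for /var/run/crio/crio.sock after restarting crio, then stats the socket. A self-contained Go sketch of that kind of wait loop (hypothetical helper, not the actual start.go implementation):

	// Sketch: poll until a unix socket appears, or give up after a timeout.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // the path exists and is a socket
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}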
	I0826 11:40:29.844471  135795 start.go:563] Will wait 60s for crictl version
	I0826 11:40:29.844532  135795 ssh_runner.go:195] Run: which crictl
	I0826 11:40:29.848652  135795 command_runner.go:130] > /usr/bin/crictl
	I0826 11:40:29.848738  135795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:40:29.891772  135795 command_runner.go:130] > Version:  0.1.0
	I0826 11:40:29.891796  135795 command_runner.go:130] > RuntimeName:  cri-o
	I0826 11:40:29.891801  135795 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0826 11:40:29.891810  135795 command_runner.go:130] > RuntimeApiVersion:  v1
	I0826 11:40:29.893171  135795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:40:29.893267  135795 ssh_runner.go:195] Run: crio --version
	I0826 11:40:29.922201  135795 command_runner.go:130] > crio version 1.29.1
	I0826 11:40:29.922238  135795 command_runner.go:130] > Version:        1.29.1
	I0826 11:40:29.922244  135795 command_runner.go:130] > GitCommit:      unknown
	I0826 11:40:29.922248  135795 command_runner.go:130] > GitCommitDate:  unknown
	I0826 11:40:29.922252  135795 command_runner.go:130] > GitTreeState:   clean
	I0826 11:40:29.922258  135795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0826 11:40:29.922262  135795 command_runner.go:130] > GoVersion:      go1.21.6
	I0826 11:40:29.922266  135795 command_runner.go:130] > Compiler:       gc
	I0826 11:40:29.922271  135795 command_runner.go:130] > Platform:       linux/amd64
	I0826 11:40:29.922275  135795 command_runner.go:130] > Linkmode:       dynamic
	I0826 11:40:29.922279  135795 command_runner.go:130] > BuildTags:      
	I0826 11:40:29.922284  135795 command_runner.go:130] >   containers_image_ostree_stub
	I0826 11:40:29.922288  135795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0826 11:40:29.922291  135795 command_runner.go:130] >   btrfs_noversion
	I0826 11:40:29.922296  135795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0826 11:40:29.922303  135795 command_runner.go:130] >   libdm_no_deferred_remove
	I0826 11:40:29.922308  135795 command_runner.go:130] >   seccomp
	I0826 11:40:29.922316  135795 command_runner.go:130] > LDFlags:          unknown
	I0826 11:40:29.922321  135795 command_runner.go:130] > SeccompEnabled:   true
	I0826 11:40:29.922329  135795 command_runner.go:130] > AppArmorEnabled:  false
	I0826 11:40:29.923709  135795 ssh_runner.go:195] Run: crio --version
	I0826 11:40:29.952366  135795 command_runner.go:130] > crio version 1.29.1
	I0826 11:40:29.952397  135795 command_runner.go:130] > Version:        1.29.1
	I0826 11:40:29.952403  135795 command_runner.go:130] > GitCommit:      unknown
	I0826 11:40:29.952408  135795 command_runner.go:130] > GitCommitDate:  unknown
	I0826 11:40:29.952411  135795 command_runner.go:130] > GitTreeState:   clean
	I0826 11:40:29.952417  135795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0826 11:40:29.952421  135795 command_runner.go:130] > GoVersion:      go1.21.6
	I0826 11:40:29.952425  135795 command_runner.go:130] > Compiler:       gc
	I0826 11:40:29.952430  135795 command_runner.go:130] > Platform:       linux/amd64
	I0826 11:40:29.952434  135795 command_runner.go:130] > Linkmode:       dynamic
	I0826 11:40:29.952438  135795 command_runner.go:130] > BuildTags:      
	I0826 11:40:29.952442  135795 command_runner.go:130] >   containers_image_ostree_stub
	I0826 11:40:29.952448  135795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0826 11:40:29.952454  135795 command_runner.go:130] >   btrfs_noversion
	I0826 11:40:29.952468  135795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0826 11:40:29.952476  135795 command_runner.go:130] >   libdm_no_deferred_remove
	I0826 11:40:29.952481  135795 command_runner.go:130] >   seccomp
	I0826 11:40:29.952487  135795 command_runner.go:130] > LDFlags:          unknown
	I0826 11:40:29.952491  135795 command_runner.go:130] > SeccompEnabled:   true
	I0826 11:40:29.952496  135795 command_runner.go:130] > AppArmorEnabled:  false
	I0826 11:40:29.955789  135795 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 11:40:29.957394  135795 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:40:29.960026  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:29.960321  135795 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:40:29.960352  135795 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:40:29.960642  135795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:40:29.965020  135795 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0826 11:40:29.965145  135795 kubeadm.go:883] updating cluster {Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:40:29.965317  135795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:40:29.965378  135795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:40:30.011835  135795 command_runner.go:130] > {
	I0826 11:40:30.011865  135795 command_runner.go:130] >   "images": [
	I0826 11:40:30.011870  135795 command_runner.go:130] >     {
	I0826 11:40:30.011879  135795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0826 11:40:30.011883  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.011890  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0826 11:40:30.011893  135795 command_runner.go:130] >       ],
	I0826 11:40:30.011897  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.011905  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0826 11:40:30.011912  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0826 11:40:30.011917  135795 command_runner.go:130] >       ],
	I0826 11:40:30.011923  135795 command_runner.go:130] >       "size": "87165492",
	I0826 11:40:30.011930  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.011941  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.011952  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.011961  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.011966  135795 command_runner.go:130] >     },
	I0826 11:40:30.011972  135795 command_runner.go:130] >     {
	I0826 11:40:30.011978  135795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0826 11:40:30.011982  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.011992  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0826 11:40:30.011995  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012001  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012011  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0826 11:40:30.012026  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0826 11:40:30.012035  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012041  135795 command_runner.go:130] >       "size": "87190579",
	I0826 11:40:30.012049  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012068  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012078  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012082  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012085  135795 command_runner.go:130] >     },
	I0826 11:40:30.012088  135795 command_runner.go:130] >     {
	I0826 11:40:30.012096  135795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0826 11:40:30.012102  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012111  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0826 11:40:30.012119  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012127  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012140  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0826 11:40:30.012155  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0826 11:40:30.012163  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012171  135795 command_runner.go:130] >       "size": "1363676",
	I0826 11:40:30.012175  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012184  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012193  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012204  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012209  135795 command_runner.go:130] >     },
	I0826 11:40:30.012218  135795 command_runner.go:130] >     {
	I0826 11:40:30.012230  135795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0826 11:40:30.012240  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012251  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0826 11:40:30.012258  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012262  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012276  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0826 11:40:30.012297  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0826 11:40:30.012307  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012314  135795 command_runner.go:130] >       "size": "31470524",
	I0826 11:40:30.012324  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012330  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012338  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012342  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012350  135795 command_runner.go:130] >     },
	I0826 11:40:30.012354  135795 command_runner.go:130] >     {
	I0826 11:40:30.012368  135795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0826 11:40:30.012377  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012389  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0826 11:40:30.012397  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012407  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012422  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0826 11:40:30.012433  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0826 11:40:30.012441  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012449  135795 command_runner.go:130] >       "size": "61245718",
	I0826 11:40:30.012458  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.012468  135795 command_runner.go:130] >       "username": "nonroot",
	I0826 11:40:30.012476  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012486  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012494  135795 command_runner.go:130] >     },
	I0826 11:40:30.012503  135795 command_runner.go:130] >     {
	I0826 11:40:30.012511  135795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0826 11:40:30.012518  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012525  135795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0826 11:40:30.012533  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012540  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012555  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0826 11:40:30.012568  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0826 11:40:30.012577  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012583  135795 command_runner.go:130] >       "size": "149009664",
	I0826 11:40:30.012592  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012596  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012602  135795 command_runner.go:130] >       },
	I0826 11:40:30.012608  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012622  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012633  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012641  135795 command_runner.go:130] >     },
	I0826 11:40:30.012649  135795 command_runner.go:130] >     {
	I0826 11:40:30.012659  135795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0826 11:40:30.012669  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012678  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0826 11:40:30.012685  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012689  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012703  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0826 11:40:30.012719  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0826 11:40:30.012728  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012738  135795 command_runner.go:130] >       "size": "95233506",
	I0826 11:40:30.012746  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012755  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012761  135795 command_runner.go:130] >       },
	I0826 11:40:30.012769  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012775  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012782  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012790  135795 command_runner.go:130] >     },
	I0826 11:40:30.012796  135795 command_runner.go:130] >     {
	I0826 11:40:30.012809  135795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0826 11:40:30.012818  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012830  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0826 11:40:30.012839  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012845  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.012862  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0826 11:40:30.012877  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0826 11:40:30.012886  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012895  135795 command_runner.go:130] >       "size": "89437512",
	I0826 11:40:30.012905  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.012914  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.012922  135795 command_runner.go:130] >       },
	I0826 11:40:30.012928  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.012934  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.012940  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.012943  135795 command_runner.go:130] >     },
	I0826 11:40:30.012946  135795 command_runner.go:130] >     {
	I0826 11:40:30.012955  135795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0826 11:40:30.012961  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.012969  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0826 11:40:30.012978  135795 command_runner.go:130] >       ],
	I0826 11:40:30.012989  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013017  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0826 11:40:30.013028  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0826 11:40:30.013036  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013042  135795 command_runner.go:130] >       "size": "92728217",
	I0826 11:40:30.013051  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.013061  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013071  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013080  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.013086  135795 command_runner.go:130] >     },
	I0826 11:40:30.013094  135795 command_runner.go:130] >     {
	I0826 11:40:30.013104  135795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0826 11:40:30.013116  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.013126  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0826 11:40:30.013135  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013145  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013159  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0826 11:40:30.013174  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0826 11:40:30.013183  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013192  135795 command_runner.go:130] >       "size": "68420936",
	I0826 11:40:30.013199  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.013203  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.013210  135795 command_runner.go:130] >       },
	I0826 11:40:30.013220  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013229  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013238  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.013246  135795 command_runner.go:130] >     },
	I0826 11:40:30.013254  135795 command_runner.go:130] >     {
	I0826 11:40:30.013264  135795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0826 11:40:30.013273  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.013280  135795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0826 11:40:30.013286  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013291  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.013305  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0826 11:40:30.013319  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0826 11:40:30.013328  135795 command_runner.go:130] >       ],
	I0826 11:40:30.013340  135795 command_runner.go:130] >       "size": "742080",
	I0826 11:40:30.013348  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.013357  135795 command_runner.go:130] >         "value": "65535"
	I0826 11:40:30.013365  135795 command_runner.go:130] >       },
	I0826 11:40:30.013369  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.013375  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.013382  135795 command_runner.go:130] >       "pinned": true
	I0826 11:40:30.013390  135795 command_runner.go:130] >     }
	I0826 11:40:30.013399  135795 command_runner.go:130] >   ]
	I0826 11:40:30.013407  135795 command_runner.go:130] > }
	I0826 11:40:30.013653  135795 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:40:30.013669  135795 crio.go:433] Images already preloaded, skipping extraction
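The preload check above works off the JSON that `sudo crictl images --output json` prints. A small Go sketch of how that output can be decoded and compared against a list of required tags (the struct and function names are assumptions, not minikube's crio.go):

	// Sketch: decide whether crictl's image list already covers the required tags.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type crictlImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	func allPreloaded(output []byte, required []string) (bool, error) {
		var list crictlImages
		if err := json.Unmarshal(output, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range required {
			if !have[want] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Trimmed example payload, not the full list logged above.
		out := []byte(`{"images":[{"id":"873ed75102791e5b","repoTags":["registry.k8s.io/pause:3.10"]}]}`)
		ok, err := allPreloaded(out, []string{"registry.k8s.io/pause:3.10"})
		fmt.Println(ok, err)
	}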
	I0826 11:40:30.013731  135795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:40:30.049240  135795 command_runner.go:130] > {
	I0826 11:40:30.049271  135795 command_runner.go:130] >   "images": [
	I0826 11:40:30.049275  135795 command_runner.go:130] >     {
	I0826 11:40:30.049283  135795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0826 11:40:30.049287  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049293  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0826 11:40:30.049296  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049300  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049308  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0826 11:40:30.049321  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0826 11:40:30.049326  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049334  135795 command_runner.go:130] >       "size": "87165492",
	I0826 11:40:30.049341  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049348  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049358  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049366  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049370  135795 command_runner.go:130] >     },
	I0826 11:40:30.049375  135795 command_runner.go:130] >     {
	I0826 11:40:30.049381  135795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0826 11:40:30.049388  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049394  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0826 11:40:30.049401  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049413  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049429  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0826 11:40:30.049441  135795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0826 11:40:30.049450  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049457  135795 command_runner.go:130] >       "size": "87190579",
	I0826 11:40:30.049464  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049470  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049475  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049479  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049487  135795 command_runner.go:130] >     },
	I0826 11:40:30.049493  135795 command_runner.go:130] >     {
	I0826 11:40:30.049507  135795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0826 11:40:30.049517  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049528  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0826 11:40:30.049537  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049544  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049558  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0826 11:40:30.049568  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0826 11:40:30.049575  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049599  135795 command_runner.go:130] >       "size": "1363676",
	I0826 11:40:30.049609  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049616  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049627  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049636  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049641  135795 command_runner.go:130] >     },
	I0826 11:40:30.049648  135795 command_runner.go:130] >     {
	I0826 11:40:30.049656  135795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0826 11:40:30.049663  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049672  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0826 11:40:30.049682  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049689  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049703  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0826 11:40:30.049723  135795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0826 11:40:30.049733  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049739  135795 command_runner.go:130] >       "size": "31470524",
	I0826 11:40:30.049745  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049756  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.049764  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049774  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049780  135795 command_runner.go:130] >     },
	I0826 11:40:30.049788  135795 command_runner.go:130] >     {
	I0826 11:40:30.049798  135795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0826 11:40:30.049808  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049816  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0826 11:40:30.049822  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049828  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049842  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0826 11:40:30.049857  135795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0826 11:40:30.049865  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049874  135795 command_runner.go:130] >       "size": "61245718",
	I0826 11:40:30.049880  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.049889  135795 command_runner.go:130] >       "username": "nonroot",
	I0826 11:40:30.049897  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.049903  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.049911  135795 command_runner.go:130] >     },
	I0826 11:40:30.049916  135795 command_runner.go:130] >     {
	I0826 11:40:30.049928  135795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0826 11:40:30.049938  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.049946  135795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0826 11:40:30.049955  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049961  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.049974  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0826 11:40:30.049987  135795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0826 11:40:30.049994  135795 command_runner.go:130] >       ],
	I0826 11:40:30.049998  135795 command_runner.go:130] >       "size": "149009664",
	I0826 11:40:30.050007  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050014  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050026  135795 command_runner.go:130] >       },
	I0826 11:40:30.050036  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050042  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050050  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050056  135795 command_runner.go:130] >     },
	I0826 11:40:30.050066  135795 command_runner.go:130] >     {
	I0826 11:40:30.050077  135795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0826 11:40:30.050084  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050090  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0826 11:40:30.050098  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050105  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050120  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0826 11:40:30.050134  135795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0826 11:40:30.050142  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050150  135795 command_runner.go:130] >       "size": "95233506",
	I0826 11:40:30.050163  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050168  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050172  135795 command_runner.go:130] >       },
	I0826 11:40:30.050178  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050185  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050191  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050201  135795 command_runner.go:130] >     },
	I0826 11:40:30.050209  135795 command_runner.go:130] >     {
	I0826 11:40:30.050222  135795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0826 11:40:30.050231  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050239  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0826 11:40:30.050247  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050253  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050273  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0826 11:40:30.050319  135795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0826 11:40:30.050336  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050342  135795 command_runner.go:130] >       "size": "89437512",
	I0826 11:40:30.050358  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050364  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050373  135795 command_runner.go:130] >       },
	I0826 11:40:30.050380  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050391  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050398  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050407  135795 command_runner.go:130] >     },
	I0826 11:40:30.050412  135795 command_runner.go:130] >     {
	I0826 11:40:30.050424  135795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0826 11:40:30.050435  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050443  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0826 11:40:30.050451  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050458  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050471  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0826 11:40:30.050488  135795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0826 11:40:30.050497  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050504  135795 command_runner.go:130] >       "size": "92728217",
	I0826 11:40:30.050509  135795 command_runner.go:130] >       "uid": null,
	I0826 11:40:30.050516  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050522  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050531  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050537  135795 command_runner.go:130] >     },
	I0826 11:40:30.050545  135795 command_runner.go:130] >     {
	I0826 11:40:30.050554  135795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0826 11:40:30.050562  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050569  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0826 11:40:30.050577  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050594  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050609  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0826 11:40:30.050621  135795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0826 11:40:30.050630  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050638  135795 command_runner.go:130] >       "size": "68420936",
	I0826 11:40:30.050647  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050657  135795 command_runner.go:130] >         "value": "0"
	I0826 11:40:30.050666  135795 command_runner.go:130] >       },
	I0826 11:40:30.050675  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050684  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050690  135795 command_runner.go:130] >       "pinned": false
	I0826 11:40:30.050697  135795 command_runner.go:130] >     },
	I0826 11:40:30.050702  135795 command_runner.go:130] >     {
	I0826 11:40:30.050711  135795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0826 11:40:30.050717  135795 command_runner.go:130] >       "repoTags": [
	I0826 11:40:30.050722  135795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0826 11:40:30.050728  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050732  135795 command_runner.go:130] >       "repoDigests": [
	I0826 11:40:30.050743  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0826 11:40:30.050752  135795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0826 11:40:30.050758  135795 command_runner.go:130] >       ],
	I0826 11:40:30.050763  135795 command_runner.go:130] >       "size": "742080",
	I0826 11:40:30.050769  135795 command_runner.go:130] >       "uid": {
	I0826 11:40:30.050774  135795 command_runner.go:130] >         "value": "65535"
	I0826 11:40:30.050779  135795 command_runner.go:130] >       },
	I0826 11:40:30.050784  135795 command_runner.go:130] >       "username": "",
	I0826 11:40:30.050789  135795 command_runner.go:130] >       "spec": null,
	I0826 11:40:30.050793  135795 command_runner.go:130] >       "pinned": true
	I0826 11:40:30.050799  135795 command_runner.go:130] >     }
	I0826 11:40:30.050803  135795 command_runner.go:130] >   ]
	I0826 11:40:30.050809  135795 command_runner.go:130] > }
	I0826 11:40:30.050955  135795 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 11:40:30.050968  135795 cache_images.go:84] Images are preloaded, skipping loading
	I0826 11:40:30.050977  135795 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.31.0 crio true true} ...
	I0826 11:40:30.051094  135795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-523807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
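kubeadm.go:946 logs the kubelet systemd drop-in it generated for this node. A hedged Go sketch of rendering the same ExecStart flags from a template follows; the template text and field names are assumptions for illustration, not minikube's real template.

	// Sketch: render kubelet ExecStart flags for a node from a text/template.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletFlags = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletFlags))
		err := tmpl.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.0",
			"NodeName":          "multinode-523807",
			"NodeIP":            "192.168.39.26",
		})
		if err != nil {
			panic(err)
		}
	}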
	I0826 11:40:30.051169  135795 ssh_runner.go:195] Run: crio config
	I0826 11:40:30.084073  135795 command_runner.go:130] ! time="2024-08-26 11:40:30.062416727Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0826 11:40:30.095225  135795 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0826 11:40:30.100592  135795 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0826 11:40:30.100619  135795 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0826 11:40:30.100626  135795 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0826 11:40:30.100631  135795 command_runner.go:130] > #
	I0826 11:40:30.100639  135795 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0826 11:40:30.100645  135795 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0826 11:40:30.100651  135795 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0826 11:40:30.100660  135795 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0826 11:40:30.100664  135795 command_runner.go:130] > # reload'.
	I0826 11:40:30.100671  135795 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0826 11:40:30.100676  135795 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0826 11:40:30.100682  135795 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0826 11:40:30.100688  135795 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0826 11:40:30.100691  135795 command_runner.go:130] > [crio]
	I0826 11:40:30.100697  135795 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0826 11:40:30.100705  135795 command_runner.go:130] > # containers images, in this directory.
	I0826 11:40:30.100710  135795 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0826 11:40:30.100724  135795 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0826 11:40:30.100737  135795 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0826 11:40:30.100747  135795 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0826 11:40:30.100753  135795 command_runner.go:130] > # imagestore = ""
	I0826 11:40:30.100766  135795 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0826 11:40:30.100779  135795 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0826 11:40:30.100789  135795 command_runner.go:130] > storage_driver = "overlay"
	I0826 11:40:30.100798  135795 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0826 11:40:30.100809  135795 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0826 11:40:30.100831  135795 command_runner.go:130] > storage_option = [
	I0826 11:40:30.100842  135795 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0826 11:40:30.100855  135795 command_runner.go:130] > ]
	I0826 11:40:30.100863  135795 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0826 11:40:30.100871  135795 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0826 11:40:30.100876  135795 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0826 11:40:30.100883  135795 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0826 11:40:30.100891  135795 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0826 11:40:30.100896  135795 command_runner.go:130] > # always happen on a node reboot
	I0826 11:40:30.100901  135795 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0826 11:40:30.100913  135795 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0826 11:40:30.100925  135795 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0826 11:40:30.100935  135795 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0826 11:40:30.100946  135795 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0826 11:40:30.100959  135795 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0826 11:40:30.100969  135795 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0826 11:40:30.100974  135795 command_runner.go:130] > # internal_wipe = true
	I0826 11:40:30.100983  135795 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0826 11:40:30.100991  135795 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0826 11:40:30.100995  135795 command_runner.go:130] > # internal_repair = false
	I0826 11:40:30.101002  135795 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0826 11:40:30.101011  135795 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0826 11:40:30.101020  135795 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0826 11:40:30.101029  135795 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0826 11:40:30.101038  135795 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0826 11:40:30.101044  135795 command_runner.go:130] > [crio.api]
	I0826 11:40:30.101053  135795 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0826 11:40:30.101061  135795 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0826 11:40:30.101068  135795 command_runner.go:130] > # IP address on which the stream server will listen.
	I0826 11:40:30.101072  135795 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0826 11:40:30.101078  135795 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0826 11:40:30.101088  135795 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0826 11:40:30.101096  135795 command_runner.go:130] > # stream_port = "0"
	I0826 11:40:30.101109  135795 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0826 11:40:30.101118  135795 command_runner.go:130] > # stream_enable_tls = false
	I0826 11:40:30.101130  135795 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0826 11:40:30.101138  135795 command_runner.go:130] > # stream_idle_timeout = ""
	I0826 11:40:30.101155  135795 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0826 11:40:30.101164  135795 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0826 11:40:30.101168  135795 command_runner.go:130] > # minutes.
	I0826 11:40:30.101177  135795 command_runner.go:130] > # stream_tls_cert = ""
	I0826 11:40:30.101187  135795 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0826 11:40:30.101200  135795 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0826 11:40:30.101209  135795 command_runner.go:130] > # stream_tls_key = ""
	I0826 11:40:30.101218  135795 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0826 11:40:30.101231  135795 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0826 11:40:30.101261  135795 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0826 11:40:30.101270  135795 command_runner.go:130] > # stream_tls_ca = ""
	I0826 11:40:30.101282  135795 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0826 11:40:30.101293  135795 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0826 11:40:30.101305  135795 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0826 11:40:30.101315  135795 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0826 11:40:30.101328  135795 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0826 11:40:30.101338  135795 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0826 11:40:30.101344  135795 command_runner.go:130] > [crio.runtime]
	I0826 11:40:30.101353  135795 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0826 11:40:30.101366  135795 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0826 11:40:30.101375  135795 command_runner.go:130] > # "nofile=1024:2048"
	I0826 11:40:30.101385  135795 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0826 11:40:30.101394  135795 command_runner.go:130] > # default_ulimits = [
	I0826 11:40:30.101399  135795 command_runner.go:130] > # ]
	I0826 11:40:30.101410  135795 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0826 11:40:30.101418  135795 command_runner.go:130] > # no_pivot = false
	I0826 11:40:30.101425  135795 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0826 11:40:30.101437  135795 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0826 11:40:30.101448  135795 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0826 11:40:30.101460  135795 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0826 11:40:30.101467  135795 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0826 11:40:30.101479  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0826 11:40:30.101489  135795 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0826 11:40:30.101524  135795 command_runner.go:130] > # Cgroup setting for conmon
	I0826 11:40:30.101545  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0826 11:40:30.101552  135795 command_runner.go:130] > conmon_cgroup = "pod"
	I0826 11:40:30.101565  135795 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0826 11:40:30.101576  135795 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0826 11:40:30.101591  135795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0826 11:40:30.101598  135795 command_runner.go:130] > conmon_env = [
	I0826 11:40:30.101606  135795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0826 11:40:30.101614  135795 command_runner.go:130] > ]
	I0826 11:40:30.101623  135795 command_runner.go:130] > # Additional environment variables to set for all the
	I0826 11:40:30.101634  135795 command_runner.go:130] > # containers. These are overridden if set in the
	I0826 11:40:30.101646  135795 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0826 11:40:30.101655  135795 command_runner.go:130] > # default_env = [
	I0826 11:40:30.101661  135795 command_runner.go:130] > # ]
	I0826 11:40:30.101672  135795 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0826 11:40:30.101686  135795 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0826 11:40:30.101691  135795 command_runner.go:130] > # selinux = false
	I0826 11:40:30.101701  135795 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0826 11:40:30.101713  135795 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0826 11:40:30.101728  135795 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0826 11:40:30.101738  135795 command_runner.go:130] > # seccomp_profile = ""
	I0826 11:40:30.101747  135795 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0826 11:40:30.101758  135795 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0826 11:40:30.101771  135795 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0826 11:40:30.101778  135795 command_runner.go:130] > # which might increase security.
	I0826 11:40:30.101783  135795 command_runner.go:130] > # This option is currently deprecated,
	I0826 11:40:30.101792  135795 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0826 11:40:30.101802  135795 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0826 11:40:30.101812  135795 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0826 11:40:30.101826  135795 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0826 11:40:30.101837  135795 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0826 11:40:30.101859  135795 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0826 11:40:30.101867  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.101872  135795 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0826 11:40:30.101883  135795 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0826 11:40:30.101891  135795 command_runner.go:130] > # the cgroup blockio controller.
	I0826 11:40:30.101901  135795 command_runner.go:130] > # blockio_config_file = ""
	I0826 11:40:30.101911  135795 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0826 11:40:30.101920  135795 command_runner.go:130] > # blockio parameters.
	I0826 11:40:30.101927  135795 command_runner.go:130] > # blockio_reload = false
	I0826 11:40:30.101941  135795 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0826 11:40:30.101949  135795 command_runner.go:130] > # irqbalance daemon.
	I0826 11:40:30.101954  135795 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0826 11:40:30.101969  135795 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0826 11:40:30.101983  135795 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0826 11:40:30.101995  135795 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0826 11:40:30.102007  135795 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0826 11:40:30.102021  135795 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0826 11:40:30.102031  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.102038  135795 command_runner.go:130] > # rdt_config_file = ""
	I0826 11:40:30.102045  135795 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0826 11:40:30.102054  135795 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0826 11:40:30.102079  135795 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0826 11:40:30.102089  135795 command_runner.go:130] > # separate_pull_cgroup = ""
	I0826 11:40:30.102097  135795 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0826 11:40:30.102110  135795 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0826 11:40:30.102118  135795 command_runner.go:130] > # will be added.
	I0826 11:40:30.102123  135795 command_runner.go:130] > # default_capabilities = [
	I0826 11:40:30.102129  135795 command_runner.go:130] > # 	"CHOWN",
	I0826 11:40:30.102136  135795 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0826 11:40:30.102145  135795 command_runner.go:130] > # 	"FSETID",
	I0826 11:40:30.102150  135795 command_runner.go:130] > # 	"FOWNER",
	I0826 11:40:30.102156  135795 command_runner.go:130] > # 	"SETGID",
	I0826 11:40:30.102162  135795 command_runner.go:130] > # 	"SETUID",
	I0826 11:40:30.102168  135795 command_runner.go:130] > # 	"SETPCAP",
	I0826 11:40:30.102174  135795 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0826 11:40:30.102179  135795 command_runner.go:130] > # 	"KILL",
	I0826 11:40:30.102184  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102195  135795 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0826 11:40:30.102207  135795 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0826 11:40:30.102211  135795 command_runner.go:130] > # add_inheritable_capabilities = false
	I0826 11:40:30.102221  135795 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0826 11:40:30.102234  135795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0826 11:40:30.102242  135795 command_runner.go:130] > default_sysctls = [
	I0826 11:40:30.102251  135795 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0826 11:40:30.102256  135795 command_runner.go:130] > ]
	I0826 11:40:30.102264  135795 command_runner.go:130] > # List of devices on the host that a
	I0826 11:40:30.102274  135795 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0826 11:40:30.102283  135795 command_runner.go:130] > # allowed_devices = [
	I0826 11:40:30.102288  135795 command_runner.go:130] > # 	"/dev/fuse",
	I0826 11:40:30.102292  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102296  135795 command_runner.go:130] > # List of additional devices, specified as
	I0826 11:40:30.102310  135795 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0826 11:40:30.102321  135795 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0826 11:40:30.102333  135795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0826 11:40:30.102343  135795 command_runner.go:130] > # additional_devices = [
	I0826 11:40:30.102348  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102357  135795 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0826 11:40:30.102364  135795 command_runner.go:130] > # cdi_spec_dirs = [
	I0826 11:40:30.102370  135795 command_runner.go:130] > # 	"/etc/cdi",
	I0826 11:40:30.102375  135795 command_runner.go:130] > # 	"/var/run/cdi",
	I0826 11:40:30.102382  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102391  135795 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0826 11:40:30.102404  135795 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0826 11:40:30.102411  135795 command_runner.go:130] > # Defaults to false.
	I0826 11:40:30.102419  135795 command_runner.go:130] > # device_ownership_from_security_context = false
	I0826 11:40:30.102433  135795 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0826 11:40:30.102442  135795 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0826 11:40:30.102451  135795 command_runner.go:130] > # hooks_dir = [
	I0826 11:40:30.102458  135795 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0826 11:40:30.102464  135795 command_runner.go:130] > # ]
	I0826 11:40:30.102471  135795 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0826 11:40:30.102483  135795 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0826 11:40:30.102495  135795 command_runner.go:130] > # its default mounts from the following two files:
	I0826 11:40:30.102503  135795 command_runner.go:130] > #
	I0826 11:40:30.102513  135795 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0826 11:40:30.102526  135795 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0826 11:40:30.102538  135795 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0826 11:40:30.102543  135795 command_runner.go:130] > #
	I0826 11:40:30.102549  135795 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0826 11:40:30.102556  135795 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0826 11:40:30.102565  135795 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0826 11:40:30.102573  135795 command_runner.go:130] > #      only add mounts it finds in this file.
	I0826 11:40:30.102581  135795 command_runner.go:130] > #
	I0826 11:40:30.102589  135795 command_runner.go:130] > # default_mounts_file = ""
	I0826 11:40:30.102599  135795 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0826 11:40:30.102610  135795 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0826 11:40:30.102619  135795 command_runner.go:130] > pids_limit = 1024
	I0826 11:40:30.102629  135795 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0826 11:40:30.102642  135795 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0826 11:40:30.102651  135795 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0826 11:40:30.102667  135795 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0826 11:40:30.102677  135795 command_runner.go:130] > # log_size_max = -1
	I0826 11:40:30.102688  135795 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0826 11:40:30.102700  135795 command_runner.go:130] > # log_to_journald = false
	I0826 11:40:30.102710  135795 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0826 11:40:30.102719  135795 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0826 11:40:30.102725  135795 command_runner.go:130] > # Path to directory for container attach sockets.
	I0826 11:40:30.102733  135795 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0826 11:40:30.102743  135795 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0826 11:40:30.102754  135795 command_runner.go:130] > # bind_mount_prefix = ""
	I0826 11:40:30.102763  135795 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0826 11:40:30.102770  135795 command_runner.go:130] > # read_only = false
	I0826 11:40:30.102782  135795 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0826 11:40:30.102791  135795 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0826 11:40:30.102799  135795 command_runner.go:130] > # live configuration reload.
	I0826 11:40:30.102804  135795 command_runner.go:130] > # log_level = "info"
	I0826 11:40:30.102810  135795 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0826 11:40:30.102816  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.102822  135795 command_runner.go:130] > # log_filter = ""
	I0826 11:40:30.102853  135795 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0826 11:40:30.102867  135795 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0826 11:40:30.102877  135795 command_runner.go:130] > # separated by comma.
	I0826 11:40:30.102889  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.102898  135795 command_runner.go:130] > # uid_mappings = ""
	I0826 11:40:30.102907  135795 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0826 11:40:30.102920  135795 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0826 11:40:30.102930  135795 command_runner.go:130] > # separated by comma.
	I0826 11:40:30.102942  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.102951  135795 command_runner.go:130] > # gid_mappings = ""
	I0826 11:40:30.102960  135795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0826 11:40:30.102972  135795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0826 11:40:30.102982  135795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0826 11:40:30.102991  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.103000  135795 command_runner.go:130] > # minimum_mappable_uid = -1
	I0826 11:40:30.103011  135795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0826 11:40:30.103024  135795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0826 11:40:30.103036  135795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0826 11:40:30.103050  135795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0826 11:40:30.103063  135795 command_runner.go:130] > # minimum_mappable_gid = -1
	I0826 11:40:30.103071  135795 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0826 11:40:30.103080  135795 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0826 11:40:30.103093  135795 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0826 11:40:30.103102  135795 command_runner.go:130] > # ctr_stop_timeout = 30
	I0826 11:40:30.103114  135795 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0826 11:40:30.103128  135795 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0826 11:40:30.103138  135795 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0826 11:40:30.103148  135795 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0826 11:40:30.103152  135795 command_runner.go:130] > drop_infra_ctr = false
	I0826 11:40:30.103159  135795 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0826 11:40:30.103171  135795 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0826 11:40:30.103186  135795 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0826 11:40:30.103196  135795 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0826 11:40:30.103207  135795 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0826 11:40:30.103219  135795 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0826 11:40:30.103229  135795 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0826 11:40:30.103237  135795 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0826 11:40:30.103241  135795 command_runner.go:130] > # shared_cpuset = ""
	I0826 11:40:30.103250  135795 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0826 11:40:30.103262  135795 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0826 11:40:30.103272  135795 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0826 11:40:30.103286  135795 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0826 11:40:30.103296  135795 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0826 11:40:30.103305  135795 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0826 11:40:30.103317  135795 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0826 11:40:30.103322  135795 command_runner.go:130] > # enable_criu_support = false
	I0826 11:40:30.103327  135795 command_runner.go:130] > # Enable/disable the generation of the container,
	I0826 11:40:30.103339  135795 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0826 11:40:30.103348  135795 command_runner.go:130] > # enable_pod_events = false
	I0826 11:40:30.103358  135795 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0826 11:40:30.103383  135795 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0826 11:40:30.103393  135795 command_runner.go:130] > # default_runtime = "runc"
	I0826 11:40:30.103401  135795 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0826 11:40:30.103411  135795 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0826 11:40:30.103424  135795 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0826 11:40:30.103439  135795 command_runner.go:130] > # creation as a file is not desired either.
	I0826 11:40:30.103454  135795 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0826 11:40:30.103465  135795 command_runner.go:130] > # the hostname is being managed dynamically.
	I0826 11:40:30.103476  135795 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0826 11:40:30.103481  135795 command_runner.go:130] > # ]
	I0826 11:40:30.103491  135795 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0826 11:40:30.103497  135795 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0826 11:40:30.103508  135795 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0826 11:40:30.103520  135795 command_runner.go:130] > # Each entry in the table should follow the format:
	I0826 11:40:30.103528  135795 command_runner.go:130] > #
	I0826 11:40:30.103536  135795 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0826 11:40:30.103547  135795 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0826 11:40:30.103599  135795 command_runner.go:130] > # runtime_type = "oci"
	I0826 11:40:30.103611  135795 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0826 11:40:30.103619  135795 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0826 11:40:30.103626  135795 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0826 11:40:30.103634  135795 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0826 11:40:30.103643  135795 command_runner.go:130] > # monitor_env = []
	I0826 11:40:30.103651  135795 command_runner.go:130] > # privileged_without_host_devices = false
	I0826 11:40:30.103660  135795 command_runner.go:130] > # allowed_annotations = []
	I0826 11:40:30.103669  135795 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0826 11:40:30.103675  135795 command_runner.go:130] > # Where:
	I0826 11:40:30.103681  135795 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0826 11:40:30.103695  135795 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0826 11:40:30.103708  135795 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0826 11:40:30.103721  135795 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0826 11:40:30.103727  135795 command_runner.go:130] > #   in $PATH.
	I0826 11:40:30.103737  135795 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0826 11:40:30.103748  135795 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0826 11:40:30.103758  135795 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0826 11:40:30.103762  135795 command_runner.go:130] > #   state.
	I0826 11:40:30.103771  135795 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0826 11:40:30.103783  135795 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0826 11:40:30.103796  135795 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0826 11:40:30.103808  135795 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0826 11:40:30.103820  135795 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0826 11:40:30.103830  135795 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0826 11:40:30.103842  135795 command_runner.go:130] > #   The currently recognized values are:
	I0826 11:40:30.103856  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0826 11:40:30.103871  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0826 11:40:30.103884  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0826 11:40:30.103896  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0826 11:40:30.103911  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0826 11:40:30.103923  135795 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0826 11:40:30.103931  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0826 11:40:30.103943  135795 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0826 11:40:30.103956  135795 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0826 11:40:30.103969  135795 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0826 11:40:30.103979  135795 command_runner.go:130] > #   deprecated option "conmon".
	I0826 11:40:30.103989  135795 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0826 11:40:30.104000  135795 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0826 11:40:30.104012  135795 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0826 11:40:30.104020  135795 command_runner.go:130] > #   should be moved to the container's cgroup
	I0826 11:40:30.104029  135795 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0826 11:40:30.104040  135795 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0826 11:40:30.104050  135795 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0826 11:40:30.104061  135795 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0826 11:40:30.104068  135795 command_runner.go:130] > #
	I0826 11:40:30.104074  135795 command_runner.go:130] > # Using the seccomp notifier feature:
	I0826 11:40:30.104082  135795 command_runner.go:130] > #
	I0826 11:40:30.104092  135795 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0826 11:40:30.104102  135795 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0826 11:40:30.104106  135795 command_runner.go:130] > #
	I0826 11:40:30.104115  135795 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0826 11:40:30.104127  135795 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0826 11:40:30.104133  135795 command_runner.go:130] > #
	I0826 11:40:30.104143  135795 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0826 11:40:30.104152  135795 command_runner.go:130] > # feature.
	I0826 11:40:30.104158  135795 command_runner.go:130] > #
	I0826 11:40:30.104170  135795 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0826 11:40:30.104181  135795 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0826 11:40:30.104188  135795 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0826 11:40:30.104200  135795 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0826 11:40:30.104212  135795 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0826 11:40:30.104220  135795 command_runner.go:130] > #
	I0826 11:40:30.104232  135795 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0826 11:40:30.104244  135795 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0826 11:40:30.104252  135795 command_runner.go:130] > #
	I0826 11:40:30.104262  135795 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0826 11:40:30.104271  135795 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0826 11:40:30.104274  135795 command_runner.go:130] > #
	I0826 11:40:30.104282  135795 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0826 11:40:30.104294  135795 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0826 11:40:30.104303  135795 command_runner.go:130] > # limitation.
	I0826 11:40:30.104312  135795 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0826 11:40:30.104322  135795 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0826 11:40:30.104329  135795 command_runner.go:130] > runtime_type = "oci"
	I0826 11:40:30.104337  135795 command_runner.go:130] > runtime_root = "/run/runc"
	I0826 11:40:30.104343  135795 command_runner.go:130] > runtime_config_path = ""
	I0826 11:40:30.104353  135795 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0826 11:40:30.104357  135795 command_runner.go:130] > monitor_cgroup = "pod"
	I0826 11:40:30.104365  135795 command_runner.go:130] > monitor_exec_cgroup = ""
	I0826 11:40:30.104371  135795 command_runner.go:130] > monitor_env = [
	I0826 11:40:30.104384  135795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0826 11:40:30.104392  135795 command_runner.go:130] > ]
	I0826 11:40:30.104399  135795 command_runner.go:130] > privileged_without_host_devices = false
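For reference, and not part of the configuration dumped above: following the [crio.runtime.runtimes.runtime-handler] format described in the comments, an additional handler could be registered with a table such as the sketch below. The handler name, binary path and annotation list are illustrative assumptions, not values used in this run.

	[crio.runtime.runtimes.crun]
	# Hypothetical additional handler; assumes a crun binary is installed on the host.
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	# Allow only the userns annotation documented above for this handler.
	allowed_annotations = [
		"io.kubernetes.cri-o.userns-mode",
	]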
	I0826 11:40:30.104412  135795 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0826 11:40:30.104423  135795 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0826 11:40:30.104433  135795 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0826 11:40:30.104444  135795 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0826 11:40:30.104457  135795 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0826 11:40:30.104469  135795 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0826 11:40:30.104486  135795 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0826 11:40:30.104501  135795 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0826 11:40:30.104511  135795 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0826 11:40:30.104522  135795 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0826 11:40:30.104526  135795 command_runner.go:130] > # Example:
	I0826 11:40:30.104530  135795 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0826 11:40:30.104536  135795 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0826 11:40:30.104544  135795 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0826 11:40:30.104556  135795 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0826 11:40:30.104563  135795 command_runner.go:130] > # cpuset = 0
	I0826 11:40:30.104569  135795 command_runner.go:130] > # cpushares = "0-1"
	I0826 11:40:30.104578  135795 command_runner.go:130] > # Where:
	I0826 11:40:30.104585  135795 command_runner.go:130] > # The workload name is workload-type.
	I0826 11:40:30.104600  135795 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0826 11:40:30.104610  135795 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0826 11:40:30.104616  135795 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0826 11:40:30.104629  135795 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0826 11:40:30.104642  135795 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0826 11:40:30.104652  135795 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0826 11:40:30.104666  135795 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0826 11:40:30.104676  135795 command_runner.go:130] > # Default value is set to true
	I0826 11:40:30.104684  135795 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0826 11:40:30.104695  135795 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0826 11:40:30.104702  135795 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0826 11:40:30.104707  135795 command_runner.go:130] > # Default value is set to 'false'
	I0826 11:40:30.104717  135795 command_runner.go:130] > # disable_hostport_mapping = false
	I0826 11:40:30.104735  135795 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0826 11:40:30.104744  135795 command_runner.go:130] > #
	I0826 11:40:30.104753  135795 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0826 11:40:30.104766  135795 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0826 11:40:30.104778  135795 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0826 11:40:30.104788  135795 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0826 11:40:30.104794  135795 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0826 11:40:30.104799  135795 command_runner.go:130] > [crio.image]
	I0826 11:40:30.104811  135795 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0826 11:40:30.104822  135795 command_runner.go:130] > # default_transport = "docker://"
	I0826 11:40:30.104834  135795 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0826 11:40:30.104851  135795 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0826 11:40:30.104860  135795 command_runner.go:130] > # global_auth_file = ""
	I0826 11:40:30.104869  135795 command_runner.go:130] > # The image used to instantiate infra containers.
	I0826 11:40:30.104877  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.104883  135795 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0826 11:40:30.104895  135795 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0826 11:40:30.104905  135795 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0826 11:40:30.104916  135795 command_runner.go:130] > # This option supports live configuration reload.
	I0826 11:40:30.104927  135795 command_runner.go:130] > # pause_image_auth_file = ""
	I0826 11:40:30.104939  135795 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0826 11:40:30.104951  135795 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0826 11:40:30.104960  135795 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0826 11:40:30.104966  135795 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0826 11:40:30.104972  135795 command_runner.go:130] > # pause_command = "/pause"
	I0826 11:40:30.104978  135795 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0826 11:40:30.104986  135795 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0826 11:40:30.104995  135795 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0826 11:40:30.105011  135795 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0826 11:40:30.105023  135795 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0826 11:40:30.105035  135795 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0826 11:40:30.105045  135795 command_runner.go:130] > # pinned_images = [
	I0826 11:40:30.105051  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105063  135795 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0826 11:40:30.105072  135795 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0826 11:40:30.105078  135795 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0826 11:40:30.105086  135795 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0826 11:40:30.105091  135795 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0826 11:40:30.105095  135795 command_runner.go:130] > # signature_policy = ""
	I0826 11:40:30.105101  135795 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0826 11:40:30.105109  135795 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0826 11:40:30.105115  135795 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0826 11:40:30.105124  135795 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0826 11:40:30.105129  135795 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0826 11:40:30.105139  135795 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0826 11:40:30.105153  135795 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0826 11:40:30.105166  135795 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0826 11:40:30.105175  135795 command_runner.go:130] > # changing them here.
	I0826 11:40:30.105182  135795 command_runner.go:130] > # insecure_registries = [
	I0826 11:40:30.105190  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105201  135795 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0826 11:40:30.105210  135795 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0826 11:40:30.105214  135795 command_runner.go:130] > # image_volumes = "mkdir"
	I0826 11:40:30.105219  135795 command_runner.go:130] > # Temporary directory to use for storing big files
	I0826 11:40:30.105225  135795 command_runner.go:130] > # big_files_temporary_dir = ""
	I0826 11:40:30.105235  135795 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0826 11:40:30.105241  135795 command_runner.go:130] > # CNI plugins.
	I0826 11:40:30.105244  135795 command_runner.go:130] > [crio.network]
	I0826 11:40:30.105250  135795 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0826 11:40:30.105257  135795 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0826 11:40:30.105262  135795 command_runner.go:130] > # cni_default_network = ""
	I0826 11:40:30.105269  135795 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0826 11:40:30.105274  135795 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0826 11:40:30.105281  135795 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0826 11:40:30.105285  135795 command_runner.go:130] > # plugin_dirs = [
	I0826 11:40:30.105288  135795 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0826 11:40:30.105292  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105298  135795 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0826 11:40:30.105304  135795 command_runner.go:130] > [crio.metrics]
	I0826 11:40:30.105309  135795 command_runner.go:130] > # Globally enable or disable metrics support.
	I0826 11:40:30.105314  135795 command_runner.go:130] > enable_metrics = true
	I0826 11:40:30.105319  135795 command_runner.go:130] > # Specify enabled metrics collectors.
	I0826 11:40:30.105326  135795 command_runner.go:130] > # Per default all metrics are enabled.
	I0826 11:40:30.105332  135795 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0826 11:40:30.105340  135795 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0826 11:40:30.105345  135795 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0826 11:40:30.105351  135795 command_runner.go:130] > # metrics_collectors = [
	I0826 11:40:30.105355  135795 command_runner.go:130] > # 	"operations",
	I0826 11:40:30.105361  135795 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0826 11:40:30.105370  135795 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0826 11:40:30.105377  135795 command_runner.go:130] > # 	"operations_errors",
	I0826 11:40:30.105386  135795 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0826 11:40:30.105391  135795 command_runner.go:130] > # 	"image_pulls_by_name",
	I0826 11:40:30.105397  135795 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0826 11:40:30.105401  135795 command_runner.go:130] > # 	"image_pulls_failures",
	I0826 11:40:30.105407  135795 command_runner.go:130] > # 	"image_pulls_successes",
	I0826 11:40:30.105412  135795 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0826 11:40:30.105429  135795 command_runner.go:130] > # 	"image_layer_reuse",
	I0826 11:40:30.105434  135795 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0826 11:40:30.105438  135795 command_runner.go:130] > # 	"containers_oom_total",
	I0826 11:40:30.105442  135795 command_runner.go:130] > # 	"containers_oom",
	I0826 11:40:30.105449  135795 command_runner.go:130] > # 	"processes_defunct",
	I0826 11:40:30.105453  135795 command_runner.go:130] > # 	"operations_total",
	I0826 11:40:30.105459  135795 command_runner.go:130] > # 	"operations_latency_seconds",
	I0826 11:40:30.105463  135795 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0826 11:40:30.105467  135795 command_runner.go:130] > # 	"operations_errors_total",
	I0826 11:40:30.105471  135795 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0826 11:40:30.105476  135795 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0826 11:40:30.105482  135795 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0826 11:40:30.105486  135795 command_runner.go:130] > # 	"image_pulls_success_total",
	I0826 11:40:30.105495  135795 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0826 11:40:30.105499  135795 command_runner.go:130] > # 	"containers_oom_count_total",
	I0826 11:40:30.105504  135795 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0826 11:40:30.105509  135795 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0826 11:40:30.105514  135795 command_runner.go:130] > # ]
	I0826 11:40:30.105519  135795 command_runner.go:130] > # The port on which the metrics server will listen.
	I0826 11:40:30.105523  135795 command_runner.go:130] > # metrics_port = 9090
	I0826 11:40:30.105528  135795 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0826 11:40:30.105534  135795 command_runner.go:130] > # metrics_socket = ""
	I0826 11:40:30.105538  135795 command_runner.go:130] > # The certificate for the secure metrics server.
	I0826 11:40:30.105546  135795 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0826 11:40:30.105552  135795 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0826 11:40:30.105559  135795 command_runner.go:130] > # certificate on any modification event.
	I0826 11:40:30.105563  135795 command_runner.go:130] > # metrics_cert = ""
	I0826 11:40:30.105570  135795 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0826 11:40:30.105575  135795 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0826 11:40:30.105581  135795 command_runner.go:130] > # metrics_key = ""
	I0826 11:40:30.105587  135795 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0826 11:40:30.105593  135795 command_runner.go:130] > [crio.tracing]
	I0826 11:40:30.105599  135795 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0826 11:40:30.105609  135795 command_runner.go:130] > # enable_tracing = false
	I0826 11:40:30.105614  135795 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0826 11:40:30.105621  135795 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0826 11:40:30.105629  135795 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0826 11:40:30.105635  135795 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0826 11:40:30.105640  135795 command_runner.go:130] > # CRI-O NRI configuration.
	I0826 11:40:30.105646  135795 command_runner.go:130] > [crio.nri]
	I0826 11:40:30.105651  135795 command_runner.go:130] > # Globally enable or disable NRI.
	I0826 11:40:30.105657  135795 command_runner.go:130] > # enable_nri = false
	I0826 11:40:30.105662  135795 command_runner.go:130] > # NRI socket to listen on.
	I0826 11:40:30.105667  135795 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0826 11:40:30.105673  135795 command_runner.go:130] > # NRI plugin directory to use.
	I0826 11:40:30.105678  135795 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0826 11:40:30.105682  135795 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0826 11:40:30.105689  135795 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0826 11:40:30.105694  135795 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0826 11:40:30.105699  135795 command_runner.go:130] > # nri_disable_connections = false
	I0826 11:40:30.105704  135795 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0826 11:40:30.105711  135795 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0826 11:40:30.105716  135795 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0826 11:40:30.105722  135795 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0826 11:40:30.105728  135795 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0826 11:40:30.105734  135795 command_runner.go:130] > [crio.stats]
	I0826 11:40:30.105741  135795 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0826 11:40:30.105751  135795 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0826 11:40:30.105758  135795 command_runner.go:130] > # stats_collection_period = 0
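(Note: the commented-out keys in the crio.conf dump above are the effective defaults. A minimal sketch, not CRI-O or minikube code, of reading the [crio.metrics] and [crio.tracing] tables from such a file with the github.com/BurntSushi/toml decoder; it assumes it runs on the node where /etc/crio/crio.conf lives, and keys left commented out keep the defaults set below.)

	// crio_conf_peek.go: illustrative only; reads a crio.conf like the one dumped above.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConf struct {
		Crio struct {
			Metrics struct {
				MetricsPort   int    `toml:"metrics_port"`
				MetricsSocket string `toml:"metrics_socket"`
			} `toml:"metrics"`
			Tracing struct {
				EnableTracing   bool   `toml:"enable_tracing"`
				TracingEndpoint string `toml:"tracing_endpoint"`
			} `toml:"tracing"`
		} `toml:"crio"`
	}

	func main() {
		var c crioConf
		// Defaults mirroring the commented-out values in the dump above.
		c.Crio.Metrics.MetricsPort = 9090
		c.Crio.Tracing.TracingEndpoint = "0.0.0.0:4317"

		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("metrics: port=%d socket=%q\n", c.Crio.Metrics.MetricsPort, c.Crio.Metrics.MetricsSocket)
		fmt.Printf("tracing: enabled=%v endpoint=%q\n", c.Crio.Tracing.EnableTracing, c.Crio.Tracing.TracingEndpoint)
	}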
	I0826 11:40:30.105889  135795 cni.go:84] Creating CNI manager for ""
	I0826 11:40:30.105901  135795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0826 11:40:30.105910  135795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:40:30.105933  135795 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-523807 NodeName:multinode-523807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:40:30.106058  135795 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-523807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
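(Note: the kubeadm config rendered above is a multi-document YAML stream, later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, not minikube code, that splits such a stream and reports each document's apiVersion/kind as a sanity check; it assumes the YAML was saved to a hypothetical local file "kubeadm.yaml" and uses sigs.k8s.io/yaml, which converts YAML to JSON before unmarshalling, hence the json struct tags.)

	// kubeadm_check.go: illustrative only; lists the documents in a generated kubeadm config.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		raw, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			log.Fatal(err)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var meta struct {
				APIVersion string `json:"apiVersion"`
				Kind       string `json:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				log.Fatalf("invalid document: %v", err)
			}
			fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
		}
	}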
	
	I0826 11:40:30.106125  135795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 11:40:30.116772  135795 command_runner.go:130] > kubeadm
	I0826 11:40:30.116794  135795 command_runner.go:130] > kubectl
	I0826 11:40:30.116798  135795 command_runner.go:130] > kubelet
	I0826 11:40:30.116819  135795 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:40:30.116881  135795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 11:40:30.126362  135795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0826 11:40:30.143009  135795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:40:30.159667  135795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0826 11:40:30.175805  135795 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0826 11:40:30.179928  135795 command_runner.go:130] > 192.168.39.26	control-plane.minikube.internal
	I0826 11:40:30.180018  135795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:40:30.327586  135795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:40:30.346272  135795 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807 for IP: 192.168.39.26
	I0826 11:40:30.346295  135795 certs.go:194] generating shared ca certs ...
	I0826 11:40:30.346313  135795 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:40:30.346453  135795 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:40:30.346489  135795 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:40:30.346498  135795 certs.go:256] generating profile certs ...
	I0826 11:40:30.346572  135795 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/client.key
	I0826 11:40:30.346656  135795 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key.c759d2c4
	I0826 11:40:30.346691  135795 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key
	I0826 11:40:30.346702  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0826 11:40:30.346716  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0826 11:40:30.346728  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0826 11:40:30.346741  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0826 11:40:30.346753  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0826 11:40:30.346767  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0826 11:40:30.346779  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0826 11:40:30.346793  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0826 11:40:30.346897  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:40:30.346935  135795 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:40:30.346945  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:40:30.346970  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:40:30.346998  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:40:30.347019  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:40:30.347057  135795 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:40:30.347087  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.347100  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem -> /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.347112  135795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.347787  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:40:30.377058  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:40:30.407237  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:40:30.433177  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:40:30.459426  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 11:40:30.484682  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:40:30.509102  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:40:30.533895  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/multinode-523807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:40:30.558862  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:40:30.583727  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:40:30.608720  135795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:40:30.633010  135795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:40:30.650998  135795 ssh_runner.go:195] Run: openssl version
	I0826 11:40:30.657215  135795 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0826 11:40:30.657323  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:40:30.668879  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673723  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673764  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.673818  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:40:30.679946  135795 command_runner.go:130] > b5213941
	I0826 11:40:30.680119  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:40:30.690002  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:40:30.701661  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.706974  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.707016  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.707078  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:40:30.713003  135795 command_runner.go:130] > 51391683
	I0826 11:40:30.713107  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:40:30.723045  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:40:30.734778  135795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739623  135795 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739660  135795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.739707  135795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:40:30.745857  135795 command_runner.go:130] > 3ec20f2e
	I0826 11:40:30.745944  135795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:40:30.756365  135795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:40:30.761404  135795 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:40:30.761442  135795 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0826 11:40:30.761451  135795 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0826 11:40:30.761460  135795 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0826 11:40:30.761470  135795 command_runner.go:130] > Access: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761478  135795 command_runner.go:130] > Modify: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761485  135795 command_runner.go:130] > Change: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761494  135795 command_runner.go:130] >  Birth: 2024-08-26 11:33:40.392209556 +0000
	I0826 11:40:30.761584  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 11:40:30.767774  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.767888  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 11:40:30.773988  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.774113  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 11:40:30.780128  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.780234  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 11:40:30.786305  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.786421  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 11:40:30.792629  135795 command_runner.go:130] > Certificate will not expire
	I0826 11:40:30.792719  135795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 11:40:30.798872  135795 command_runner.go:130] > Certificate will not expire
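(Note: the repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal sketch, not minikube code, of the same check using Go's crypto/x509; the certificate path is taken from the first argument, which is an assumption for illustration.)

	// checkend.go: illustrative only; mirrors "openssl x509 -noout -in <cert> -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) != 2 {
			log.Fatal("usage: checkend <cert.pem>")
		}
		raw, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil || block.Type != "CERTIFICATE" {
			log.Fatal("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Exit non-zero if the certificate expires within the next 24h,
		// matching openssl's -checkend 86400 behaviour.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}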
	I0826 11:40:30.798965  135795 kubeadm.go:392] StartCluster: {Name:multinode-523807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-523807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:40:30.799083  135795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:40:30.799136  135795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:40:30.832866  135795 command_runner.go:130] > c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4
	I0826 11:40:30.832895  135795 command_runner.go:130] > 5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51
	I0826 11:40:30.832903  135795 command_runner.go:130] > de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af
	I0826 11:40:30.832912  135795 command_runner.go:130] > 0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1
	I0826 11:40:30.832920  135795 command_runner.go:130] > 37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5
	I0826 11:40:30.832928  135795 command_runner.go:130] > 33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599
	I0826 11:40:30.832935  135795 command_runner.go:130] > 50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d
	I0826 11:40:30.832943  135795 command_runner.go:130] > 076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470
	I0826 11:40:30.834307  135795 cri.go:89] found id: "c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4"
	I0826 11:40:30.834330  135795 cri.go:89] found id: "5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51"
	I0826 11:40:30.834337  135795 cri.go:89] found id: "de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af"
	I0826 11:40:30.834341  135795 cri.go:89] found id: "0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1"
	I0826 11:40:30.834345  135795 cri.go:89] found id: "37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5"
	I0826 11:40:30.834349  135795 cri.go:89] found id: "33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599"
	I0826 11:40:30.834353  135795 cri.go:89] found id: "50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d"
	I0826 11:40:30.834357  135795 cri.go:89] found id: "076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470"
	I0826 11:40:30.834361  135795 cri.go:89] found id: ""
	I0826 11:40:30.834433  135795 ssh_runner.go:195] Run: sudo runc list -f json
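(Note: the StartCluster step above enumerates existing kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and records each ID as a "found id:" line. A minimal sketch, not minikube's cri.go, that shells out to the same command and prints the returned IDs; it assumes crictl is on PATH and passwordless sudo is available.)

	// list_kube_system.go: illustrative only; reproduces the container-ID listing above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("found id: %q\n", id)
		}
	}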
	
	
	==> CRI-O <==
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.919463304Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=04050ef9-bca5-4aad-91d1-f818e6247e7c name=/runtime.v1.RuntimeService/Status
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.919522501Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=04050ef9-bca5-4aad-91d1-f818e6247e7c name=/runtime.v1.RuntimeService/Status
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.920481286Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,Verbose:false,}" file="otel-collector/interceptors.go:62" id=16f1e771-dd4b-4b5d-9d23-741bfe958f82 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.920924282Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672437652548223,StartedAt:1724672437679232768,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240813-c6f155d6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b20ab0d2-de15-4b2b-a0d8-bf255f095a2c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b20ab0d2-de15-4b2b-a0d8-bf255f095a2c/containers/kindnet-cni/3a188bf8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b20ab0d2-de15-4b2b-a0d8-bf255f095a2c/volumes/kubernetes.io~projected/kube-api-access-r5q2p,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-4s28f_b20ab0d2-de15-4b2b-a0d8-bf255f095a2c/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=16f1e771-dd4b-4b5d-9d23-741bfe
958f82 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.921999815Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a58e22e9-1d66-4dcf-9acf-bcf03b87c47d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.922695920Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672437611524388,StartedAt:1724672437645125327,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/086be54b-fdd5-41ba-95de-0bf7fb037712/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/086be54b-fdd5-41ba-95de-0bf7fb037712/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/086be54b-fdd5-41ba-95de-0bf7fb037712/containers/coredns/bcde6da9,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/086be54b-fdd5-41ba-95de-0bf7fb037712/volumes/kubernetes.io~projected/kube-api-access-hlzbh,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-6f6b679f8f-h6q94_086be54b-fdd5-41ba-95de-0bf7fb037712/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a58e22e9-1d66-4dcf-9acf-bcf03b87c47d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.923288804Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0a68d7cb-826d-4fd6-9d59-47d6356c23fd name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.923417334Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672437471465711,StartedAt:1724672437562696419,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1cdb6c12-0a50-405a-a0e7-854d30f4c4e8/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1cdb6c12-0a50-405a-a0e7-854d30f4c4e8/containers/storage-provisioner/983fdf22,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1cdb6c12-0a50-405a-a0e7-854d30f4c4e8/volumes/kubernetes.io~projected/kube-api-access-5trnn,Readonly:true,SelinuxRelabel:fals
e,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_1cdb6c12-0a50-405a-a0e7-854d30f4c4e8/storage-provisioner/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0a68d7cb-826d-4fd6-9d59-47d6356c23fd name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.923830155Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=80b0669f-1e72-4c6b-8c37-4b1f056f5398 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.923943195Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672437453525477,StartedAt:1724672437540845261,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/cfd87747-7f1a-4c0a-85ff-26da3f196c1d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/cfd87747-7f1a-4c0a-85ff-26da3f196c1d/containers/kube-proxy/b88c94fc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/l
ib/kubelet/pods/cfd87747-7f1a-4c0a-85ff-26da3f196c1d/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/cfd87747-7f1a-4c0a-85ff-26da3f196c1d/volumes/kubernetes.io~projected/kube-api-access-jvb4v,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-9ppdx_cfd87747-7f1a-4c0a-85ff-26da3f196c1d/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-co
llector/interceptors.go:74" id=80b0669f-1e72-4c6b-8c37-4b1f056f5398 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.924392738Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=afce6255-a5d9-4c2c-bce0-0b5986da197e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.924483919Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672433662652537,StartedAt:1724672433774806161,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/20fa0c38e30c83c40815559613509b2a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/20fa0c38e30c83c40815559613509b2a/containers/kube-scheduler/11be2ccf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-multinode-523807_20fa0c38e30c83c40815559613509b2a/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeri
od:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=afce6255-a5d9-4c2c-bce0-0b5986da197e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.924797496Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=89bd9e91-b70b-4732-a9c6-c1c7a8b967b7 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.924898179Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672433624489333,StartedAt:1724672433700546215,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a2e275c84d92c417e9ab4c8527035ad1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a2e275c84d92c417e9ab4c8527035ad1/containers/etcd/1db4f50e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-m
ultinode-523807_a2e275c84d92c417e9ab4c8527035ad1/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=89bd9e91-b70b-4732-a9c6-c1c7a8b967b7 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.925377253Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9bffa534-b666-464b-9079-63a350d24cee name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.925493213Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672433531885384,StartedAt:1724672433627271450,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6c66593f4a2bc2474ad2c4283feb2ce6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6c66593f4a2bc2474ad2c4283feb2ce6/containers/kube-controller-manager/ecb0ee03,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-523807_6c66593f4a2bc2474ad2c4283feb2ce6/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9bffa534-b666-464b-9079-63a350d24cee name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.925854167Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=80305f70-1aee-4f27-89e5-e7c6873c71ec name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.925955701Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1724672433487537993,StartedAt:1724672433590132327,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/90388244b8abc4e5e89e0c250d1d47da/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/90388244b8abc4e5e89e0c250d1d47da/containers/kube-apiserver/fc29b2fb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-523807_90388244b8abc4e5e89e0c250d1d47da/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=80305f70-1aee-4f27-89e5-e7c6873c71ec name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.951594522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eaf61e15-710a-4165-88d1-915178b64fc1 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.951692550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eaf61e15-710a-4165-88d1-915178b64fc1 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.952893010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37fe709b-2b0f-4e40-b8ca-ffa5e6b377f0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.953384400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672682953361037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37fe709b-2b0f-4e40-b8ca-ffa5e6b377f0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.953993759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8d5b1d1-3c64-4e61-ab84-8bed38235e15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.954067240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8d5b1d1-3c64-4e61-ab84-8bed38235e15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:44:42 multinode-523807 crio[2752]: time="2024-08-26 11:44:42.954439831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb3e29efaf44685db84a8043a827c6f265f8d2d117a70f828b95ee630f332823,PodSandboxId:f69ccf7999b51dfbb2eaf78218b6b8592a6d168bcfc5a83fef835c690927feaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724672471076445349,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3,PodSandboxId:a67d34ee793ce1f666652aa6beedf631d4b0f835e53adbc4beb528ee9d519e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724672437494999552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500,PodSandboxId:0239eb4c2a55e0049b12686280780cb11144c8005ed848c54558d420173c0c64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724672437437031346,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e6e68fd8c2f5b89d47f315ff3296b9b4817c34234d516baa4f15f24e9337c8,PodSandboxId:b307a467a746d5beefc783eb0551e651831f02ec631188a86f3afe14064f88e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724672437376668721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8,PodSandboxId:9782bb8928fec139d1e8d1b075f49de99ea1139444b115f4542b7ac992f69cbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724672437327860990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a,PodSandboxId:155a6fa8ec21d7a4b8af3a50f6010767700e3334fd03b465ee2f08b00ee6a5c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724672433530301499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb,PodSandboxId:7468960e03094d2be0a8b28ba7f740757d1a1dfc5a2eb2a5a41dec3aa37aa33b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724672433528258488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c,PodSandboxId:03c91b9190e7cfd3823771dc25e44280c46f73443c52dc86a6f9e1e72ee69399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724672433434765265,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f,PodSandboxId:38902244e0365e6722d9c5929255741afa58fe57489a60d02317dbf89b96b356,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724672433396555658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:174e520370eee714aa7a518d55733315dcd9f005c58b0a4dab2ef0ddb0267744,PodSandboxId:1837276db9d34118447c719b5cc4e1e149a94fad8d345c6892b4b57140625b04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724672106886046571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9mhm9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 133c187f-5b89-4d46-8bb3-3c9b553dd3e5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4,PodSandboxId:84d0e1515e4f69de62085bdc61cd4ddb01b1c963f9138b977a8c9e483a133a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724672050187163123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h6q94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086be54b-fdd5-41ba-95de-0bf7fb037712,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5337003675ddbc790e7a6aab7acfa845b2a62b567885f1e63567966cb60edb51,PodSandboxId:a30d77461961207849ea0559673ad52d86e2ad731b4b38b89e5414601db1d5d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724672050128935769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1cdb6c12-0a50-405a-a0e7-854d30f4c4e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af,PodSandboxId:4ff94d658cc3c5c3604896bb63581d49246f218118b98adf0951b56caa05efcb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724672038604898746,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4s28f,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b20ab0d2-de15-4b2b-a0d8-bf255f095a2c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1,PodSandboxId:934780c4abd0f40d86545ba3d361af864881111d3af43dbbc1463145386cd5f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724672034831064703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ppdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cfd87747-7f1a-4c0a-85ff-26da3f196c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5,PodSandboxId:814e589a28c700b14d2917a760b41b2f90df114e47124f7476133f75236639ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724672024247378652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
fa0c38e30c83c40815559613509b2a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599,PodSandboxId:36d88d25591a2fbdac92f4897801691d8911eff6b6529d1312738b47dd6c0ba6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724672024180375602,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e275c84d92c417e9ab4c8527035ad1,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d,PodSandboxId:5d9702f9956d3c081ada07b457339fbe61a909672bf059370f558ee422ab739a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724672024161227902,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90388244b8abc4e5e89e0c250d1d47da,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470,PodSandboxId:11e17415ffccfac6181a146766cdc24bd364f0e69cf0fcc1b04d5d7233f4bb65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724672024101428873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-523807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c66593f4a2bc2474ad2c4283feb2ce6,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8d5b1d1-3c64-4e61-ab84-8bed38235e15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3e29efaf446       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   f69ccf7999b51       busybox-7dff88458-9mhm9
	5b1205006e366       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   a67d34ee793ce       kindnet-4s28f
	bf90c23162ca9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   0239eb4c2a55e       coredns-6f6b679f8f-h6q94
	38e6e68fd8c2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b307a467a746d       storage-provisioner
	f42d1d54cf96f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   9782bb8928fec       kube-proxy-9ppdx
	b40b91469f6f2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   155a6fa8ec21d       kube-scheduler-multinode-523807
	7cca73dc65776       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   7468960e03094       etcd-multinode-523807
	8562aeb5a0efc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   03c91b9190e7c       kube-controller-manager-multinode-523807
	9a1e7fd44c56a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   38902244e0365       kube-apiserver-multinode-523807
	174e520370eee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   1837276db9d34       busybox-7dff88458-9mhm9
	c8e515f3a0923       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   84d0e1515e4f6       coredns-6f6b679f8f-h6q94
	5337003675ddb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   a30d774619612       storage-provisioner
	de944421bc4b9       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   4ff94d658cc3c       kindnet-4s28f
	0e1d877a87d25       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   934780c4abd0f       kube-proxy-9ppdx
	37dbc154c98a1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   814e589a28c70       kube-scheduler-multinode-523807
	33470455c3b47       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   36d88d25591a2       etcd-multinode-523807
	50ee5bf6f5578       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   5d9702f9956d3       kube-apiserver-multinode-523807
	076c6b1d077f6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   11e17415ffccf       kube-controller-manager-multinode-523807
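	
	The table above is the CRI runtime's view of every container on the primary node after the restart: each pre-restart container is left in Exited state with attempt 0, and its replacement runs with attempt 1. A quick way to regenerate this listing on a live cluster is to run crictl inside the guest over the same ssh path the test harness uses; the profile name below is taken from this log and the invocation is only a sketch:
	
	  $ out/minikube-linux-amd64 -p multinode-523807 ssh "sudo crictl ps -a"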
	
	
	==> coredns [bf90c23162ca929eb2fc08c534b129617cc5aca3c49808ed3be5926fe35d2500] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44340 - 29004 "HINFO IN 4779146656012115169.2652164064989986983. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012425361s
	
	
	==> coredns [c8e515f3a0923ca29d89c9ee5627d17e0dc1e9ea22abeb869253290c47f269d4] <==
	[INFO] 10.244.1.2:39829 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939099s
	[INFO] 10.244.1.2:52993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088667s
	[INFO] 10.244.1.2:36666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105643s
	[INFO] 10.244.1.2:55897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0013678s
	[INFO] 10.244.1.2:37900 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060202s
	[INFO] 10.244.1.2:46665 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087041s
	[INFO] 10.244.1.2:48703 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061894s
	[INFO] 10.244.0.3:33209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117078s
	[INFO] 10.244.0.3:46758 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056477s
	[INFO] 10.244.0.3:39695 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050685s
	[INFO] 10.244.0.3:53216 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091954s
	[INFO] 10.244.1.2:42981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157631s
	[INFO] 10.244.1.2:55066 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158605s
	[INFO] 10.244.1.2:34567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011432s
	[INFO] 10.244.1.2:50043 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007847s
	[INFO] 10.244.0.3:53946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100062s
	[INFO] 10.244.0.3:48632 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105501s
	[INFO] 10.244.0.3:43563 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077443s
	[INFO] 10.244.0.3:39482 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008123s
	[INFO] 10.244.1.2:38022 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144892s
	[INFO] 10.244.1.2:45065 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130826s
	[INFO] 10.244.1.2:60856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092068s
	[INFO] 10.244.1.2:53154 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081147s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
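	
	The exited coredns instance above recorded the lookups issued from the busybox pods on both nodes (10.244.0.3 and 10.244.1.2) before it received SIGTERM during the node stop; the lameduck line marks its shutdown. To repeat the same in-cluster resolution check against the restarted coredns, something along these lines should work (pod name copied from the container listing above, context name inferred from the profile; the command is a sketch):
	
	  $ kubectl --context multinode-523807 exec busybox-7dff88458-9mhm9 -- nslookup kubernetes.default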
	
	
	==> describe nodes <==
	Name:               multinode-523807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-523807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=multinode-523807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_33_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:33:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-523807
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:44:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:33:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:40:36 +0000   Mon, 26 Aug 2024 11:34:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-523807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3aabec31f054fd2915bbab4bb374ee9
	  System UUID:                c3aabec3-1f05-4fd2-915b-bab4bb374ee9
	  Boot ID:                    a941a4b1-20f0-4947-ba1e-78491d4e2453
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9mhm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-6f6b679f8f-h6q94                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-523807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-4s28f                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-523807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-523807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9ppdx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-523807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-523807 event: Registered Node multinode-523807 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-523807 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node multinode-523807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node multinode-523807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node multinode-523807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node multinode-523807 event: Registered Node multinode-523807 in Controller
	
	
	Name:               multinode-523807-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-523807-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=multinode-523807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_26T11_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:41:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-523807-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:42:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:42:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:42:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:42:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 26 Aug 2024 11:41:48 +0000   Mon, 26 Aug 2024 11:42:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    multinode-523807-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 46bde34fa11942fb905e01011870dca1
	  System UUID:                46bde34f-a119-42fb-905e-01011870dca1
	  Boot ID:                    b65364d4-f9c3-433f-bd49-b7eff1dd8e80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwpns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-48gc2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4v7w6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-523807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-523807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-523807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-523807-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-523807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-523807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-523807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-523807-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-523807-m02 status is now: NodeNotReady
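	
	The Unknown conditions and the unreachable taints on multinode-523807-m02 above are the expected aftermath of stopping the secondary node: its kubelet last renewed its lease at 11:42:19 and the node controller flipped the conditions to Unknown at 11:42:59, then marked the node NotReady. On a live run the same state can be confirmed with standard kubectl queries (context name inferred from the profile; commands are a sketch):
	
	  $ kubectl --context multinode-523807 get nodes
	  $ kubectl --context multinode-523807 describe node multinode-523807-m02 | grep -A3 Taints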
	
	
	==> dmesg <==
	[  +0.060319] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.172350] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.140476] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.290235] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.984982] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.729031] systemd-fstab-generator[886]: Ignoring "noauto" option for root device
	[  +0.054813] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.482907] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.077944] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.206246] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.120884] kauditd_printk_skb: 18 callbacks suppressed
	[Aug26 11:34] kauditd_printk_skb: 69 callbacks suppressed
	[Aug26 11:35] kauditd_printk_skb: 14 callbacks suppressed
	[Aug26 11:40] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.146686] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.175042] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.155608] systemd-fstab-generator[2708]: Ignoring "noauto" option for root device
	[  +0.283634] systemd-fstab-generator[2736]: Ignoring "noauto" option for root device
	[  +3.476818] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +2.321577] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +0.082546] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.000514] kauditd_printk_skb: 82 callbacks suppressed
	[  +9.496596] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.115004] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[Aug26 11:41] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [33470455c3b47334636f9e606c98093c47289477e50a747f2eea3cc1c2700599] <==
	{"level":"info","ts":"2024-08-26T11:33:44.778545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T11:33:44.774234Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779271Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:33:44.779897Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:33:44.780609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-26T11:34:41.131148Z","caller":"traceutil/trace.go:171","msg":"trace[89642543] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"146.30578ms","start":"2024-08-26T11:34:40.984816Z","end":"2024-08-26T11:34:41.131121Z","steps":["trace[89642543] 'process raft request'  (duration: 143.9325ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:35:39.675752Z","caller":"traceutil/trace.go:171","msg":"trace[1771133621] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"243.834037ms","start":"2024-08-26T11:35:39.431894Z","end":"2024-08-26T11:35:39.675728Z","steps":["trace[1771133621] 'process raft request'  (duration: 174.174622ms)","trace[1771133621] 'compare'  (duration: 69.542075ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T11:35:39.676032Z","caller":"traceutil/trace.go:171","msg":"trace[562194986] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"219.407946ms","start":"2024-08-26T11:35:39.456611Z","end":"2024-08-26T11:35:39.676019Z","steps":["trace[562194986] 'read index received'  (duration: 149.445691ms)","trace[562194986] 'applied index is now lower than readState.Index'  (duration: 69.960986ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T11:35:39.676333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.633916ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T11:35:39.677544Z","caller":"traceutil/trace.go:171","msg":"trace[1758875829] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:617; }","duration":"220.921411ms","start":"2024-08-26T11:35:39.456605Z","end":"2024-08-26T11:35:39.677526Z","steps":["trace[1758875829] 'agreement among raft nodes before linearized reading'  (duration: 219.61439ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:35:43.674534Z","caller":"traceutil/trace.go:171","msg":"trace[2052732496] linearizableReadLoop","detail":"{readStateIndex:683; appliedIndex:682; }","duration":"218.136437ms","start":"2024-08-26T11:35:43.456366Z","end":"2024-08-26T11:35:43.674503Z","steps":["trace[2052732496] 'read index received'  (duration: 216.688147ms)","trace[2052732496] 'applied index is now lower than readState.Index'  (duration: 1.447745ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T11:35:43.674672Z","caller":"traceutil/trace.go:171","msg":"trace[635935448] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"258.846467ms","start":"2024-08-26T11:35:43.415811Z","end":"2024-08-26T11:35:43.674657Z","steps":["trace[635935448] 'process raft request'  (duration: 257.342594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T11:35:43.674734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.354641ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T11:35:43.676052Z","caller":"traceutil/trace.go:171","msg":"trace[58905992] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:648; }","duration":"219.674668ms","start":"2024-08-26T11:35:43.456360Z","end":"2024-08-26T11:35:43.676035Z","steps":["trace[58905992] 'agreement among raft nodes before linearized reading'  (duration: 218.343193ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T11:38:54.734190Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-26T11:38:54.734313Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-523807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-08-26T11:38:54.734453Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.734580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.822722Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-26T11:38:54.822795Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-26T11:38:54.822864Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-08-26T11:38:54.825708Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:38:54.825939Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:38:54.825986Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-523807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> etcd [7cca73dc657767a8f37e8c7eaf70a63bb5b5789094d99843c66b62f859e7c6cb] <==
	{"level":"info","ts":"2024-08-26T11:40:33.906656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d switched to configuration voters=(14521430496220066701)"}
	{"level":"info","ts":"2024-08-26T11:40:33.910330Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","added-peer-id":"c9867c1935b8b38d","added-peer-peer-urls":["https://192.168.39.26:2380"]}
	{"level":"info","ts":"2024-08-26T11:40:33.910461Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:40:33.910521Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:40:33.921488Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T11:40:33.921793Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c9867c1935b8b38d","initial-advertise-peer-urls":["https://192.168.39.26:2380"],"listen-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.26:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T11:40:33.921836Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T11:40:33.921981Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:40:33.922004Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-26T11:40:34.963554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-08-26T11:40:34.963679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.963727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-26T11:40:34.969167Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:multinode-523807 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T11:40:34.969323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:40:34.970576Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:40:34.971971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-26T11:40:34.972640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:40:34.972760Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T11:40:34.972785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T11:40:34.973573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T11:40:34.974428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:44:43 up 11 min,  0 users,  load average: 0.01, 0.12, 0.09
	Linux multinode-523807 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b1205006e3663b3e17998cf64097cd09a35d772520c39e0073a0d87cd199da3] <==
	I0826 11:43:38.442406       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:43:48.442227       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:43:48.443245       1 main.go:299] handling current node
	I0826 11:43:48.443276       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:43:48.443296       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:43:58.442245       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:43:58.442460       1 main.go:299] handling current node
	I0826 11:43:58.442498       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:43:58.442520       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:44:08.441963       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:44:08.442145       1 main.go:299] handling current node
	I0826 11:44:08.442179       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:44:08.442199       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:44:18.450479       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:44:18.450522       1 main.go:299] handling current node
	I0826 11:44:18.450540       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:44:18.450546       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:44:28.449570       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:44:28.449686       1 main.go:299] handling current node
	I0826 11:44:28.449716       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:44:28.449735       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:44:38.442446       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:44:38.442498       1 main.go:299] handling current node
	I0826 11:44:38.442515       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:44:38.442521       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
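	
	The restarted kindnet daemon above sees both nodes and keeps reconciling the remote pod CIDR (10.244.1.0/24) against multinode-523807-m02's address every ten seconds. kindnet realises that mapping as a plain kernel route on the host, so one way to spot-check it from the primary node is (profile name from this log; the command is a sketch):
	
	  $ out/minikube-linux-amd64 -p multinode-523807 ssh "ip route show 10.244.1.0/24"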
	
	
	==> kindnet [de944421bc4b9cc21985c6badbf6a0e8e610dcff7402d5aa39edae7dc489c2af] <==
	I0826 11:38:09.532518       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:19.536305       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:19.536340       1 main.go:299] handling current node
	I0826 11:38:19.536361       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:19.536368       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:19.536525       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:19.536533       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:29.540749       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:29.540795       1 main.go:299] handling current node
	I0826 11:38:29.540809       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:29.540815       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:29.540964       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:29.540983       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:39.539499       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:39.539716       1 main.go:299] handling current node
	I0826 11:38:39.539753       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:39.539774       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	I0826 11:38:39.539930       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:39.539952       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:49.532332       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0826 11:38:49.532407       1 main.go:322] Node multinode-523807-m03 has CIDR [10.244.3.0/24] 
	I0826 11:38:49.532603       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0826 11:38:49.532613       1 main.go:299] handling current node
	I0826 11:38:49.532641       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0826 11:38:49.532646       1 main.go:322] Node multinode-523807-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [50ee5bf6f557844741d254473bea0f08be9831e151e6402bcb9a9c581459a66d] <==
	I0826 11:38:54.744568       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0826 11:38:54.744603       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0826 11:38:54.744624       1 controller.go:132] Ending legacy_token_tracking_controller
	I0826 11:38:54.744630       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0826 11:38:54.744657       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0826 11:38:54.744696       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0826 11:38:54.744712       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	E0826 11:38:54.754373       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.754872       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0826 11:38:54.758734       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:38:54.759241       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0826 11:38:54.759525       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0826 11:38:54.759554       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0826 11:38:54.759579       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0826 11:38:54.759606       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:38:54.762613       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0826 11:38:54.762671       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0826 11:38:54.763012       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0826 11:38:54.763233       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0826 11:38:54.763901       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0826 11:38:54.768410       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.768819       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.768939       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.769026       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0826 11:38:54.769306       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [9a1e7fd44c56a80336e34bbbb3fec74b2ba289071e22783ba7ec8689ac06030f] <==
	I0826 11:40:36.315954       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 11:40:36.316891       1 aggregator.go:171] initial CRD sync complete...
	I0826 11:40:36.316932       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 11:40:36.316940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 11:40:36.320804       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 11:40:36.320871       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 11:40:36.361148       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 11:40:36.383464       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 11:40:36.383554       1 policy_source.go:224] refreshing policies
	I0826 11:40:36.389047       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 11:40:36.389174       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 11:40:36.389208       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0826 11:40:36.389263       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:40:36.394590       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0826 11:40:36.398067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0826 11:40:36.420677       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 11:40:36.420911       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:40:37.206902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 11:40:38.771132       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 11:40:38.909023       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 11:40:38.927540       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 11:40:39.038294       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 11:40:39.050699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 11:40:39.767324       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0826 11:40:39.917045       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [076c6b1d077f69d15842e7517d917e028b559383c764eb52ffe7776dfea00470] <==
	I0826 11:36:28.341752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:28.341847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:29.469730       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-523807-m03\" does not exist"
	I0826 11:36:29.469838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:29.484436       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-523807-m03" podCIDRs=["10.244.3.0/24"]
	I0826 11:36:29.484472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.487183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.494758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:29.882689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:30.217376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:33.405854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:39.805785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:49.174578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:49.174984       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:36:49.187801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:36:53.407555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:28.427283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m03"
	I0826 11:37:28.427595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:28.446304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:28.480404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.156741ms"
	I0826 11:37:28.480579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.108µs"
	I0826 11:37:33.481498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:33.492438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:37:33.497777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:37:43.569931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	
	
	==> kube-controller-manager [8562aeb5a0efce3cb391fc8dfa49ef739e7f7d76262647617321daf3c1589f9c] <==
	I0826 11:41:56.832417       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-523807-m03" podCIDRs=["10.244.2.0/24"]
	I0826 11:41:56.832534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:56.834511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:56.837229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:57.315924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:57.658844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:41:59.877721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:07.118777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:16.126869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:16.127289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:42:16.137619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:19.795867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:20.884550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:20.906521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:21.461205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m03"
	I0826 11:42:21.461231       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-523807-m02"
	I0826 11:42:59.813667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:42:59.834350       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:42:59.840384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.495694ms"
	I0826 11:42:59.840534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.888µs"
	I0826 11:43:04.892977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-523807-m02"
	I0826 11:43:39.627005       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8hw78"
	I0826 11:43:39.653886       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8hw78"
	I0826 11:43:39.654144       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7tjtx"
	I0826 11:43:39.681385       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7tjtx"
	
	
	==> kube-proxy [0e1d877a87d256a5ea7520dfa7a67d6e9f27f3b9f12ef779d680b63ef13918e1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:33:55.287964       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:33:55.321564       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	E0826 11:33:55.321630       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:33:55.395907       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:33:55.395944       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:33:55.395971       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:33:55.407040       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:33:55.407598       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:33:55.407705       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:33:55.411488       1 config.go:197] "Starting service config controller"
	I0826 11:33:55.411533       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:33:55.411566       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:33:55.411570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:33:55.412171       1 config.go:326] "Starting node config controller"
	I0826 11:33:55.412193       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:33:55.512570       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:33:55.512639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 11:33:55.512878       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f42d1d54cf96f1d2df7419310e3bba5a936fbea51c1aef800296efae8e3c13d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 11:40:37.756250       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 11:40:37.784900       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	E0826 11:40:37.785007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 11:40:37.828347       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 11:40:37.828409       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 11:40:37.828438       1 server_linux.go:169] "Using iptables Proxier"
	I0826 11:40:37.830839       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 11:40:37.831259       1 server.go:483] "Version info" version="v1.31.0"
	I0826 11:40:37.831287       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:40:37.832710       1 config.go:197] "Starting service config controller"
	I0826 11:40:37.832746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 11:40:37.832764       1 config.go:104] "Starting endpoint slice config controller"
	I0826 11:40:37.832768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 11:40:37.833448       1 config.go:326] "Starting node config controller"
	I0826 11:40:37.833471       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 11:40:37.934121       1 shared_informer.go:320] Caches are synced for node config
	I0826 11:40:37.934160       1 shared_informer.go:320] Caches are synced for service config
	I0826 11:40:37.934169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [37dbc154c98a1d6dfef37a9115dd846fdd9d0e50d81d1b4fa5d17b4618f3f4e5] <==
	E0826 11:33:46.647198       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 11:33:46.647495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:46.647527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.495704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 11:33:47.495823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.549435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 11:33:47.549588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.551216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0826 11:33:47.551333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.567529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:47.567775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.575934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0826 11:33:47.576058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.782226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 11:33:47.782291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.795162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 11:33:47.795374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.825537       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 11:33:47.825670       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 11:33:47.836605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 11:33:47.838260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 11:33:47.958554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 11:33:47.958675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 11:33:49.640136       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0826 11:38:54.747302       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b40b91469f6f2d503d5aace9cf386d04b52bda7b373025a239385802c513a69a] <==
	I0826 11:40:34.626243       1 serving.go:386] Generated self-signed cert in-memory
	W0826 11:40:36.243455       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 11:40:36.243499       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 11:40:36.243558       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 11:40:36.243570       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 11:40:36.343039       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 11:40:36.343207       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:40:36.347531       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 11:40:36.348397       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 11:40:36.361988       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 11:40:36.351321       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 11:40:36.462325       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 11:43:32 multinode-523807 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:43:32 multinode-523807 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:43:32 multinode-523807 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:43:32 multinode-523807 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:43:32 multinode-523807 kubelet[2966]: E0826 11:43:32.893675    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672612892974731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:43:32 multinode-523807 kubelet[2966]: E0826 11:43:32.893732    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672612892974731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:43:42 multinode-523807 kubelet[2966]: E0826 11:43:42.895973    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672622895245567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:43:42 multinode-523807 kubelet[2966]: E0826 11:43:42.896003    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672622895245567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:43:52 multinode-523807 kubelet[2966]: E0826 11:43:52.901739    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672632897865215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:43:52 multinode-523807 kubelet[2966]: E0826 11:43:52.901817    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672632897865215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:02 multinode-523807 kubelet[2966]: E0826 11:44:02.904454    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672642903219123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:02 multinode-523807 kubelet[2966]: E0826 11:44:02.905248    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672642903219123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:12 multinode-523807 kubelet[2966]: E0826 11:44:12.908375    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672652908046483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:12 multinode-523807 kubelet[2966]: E0826 11:44:12.908506    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672652908046483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:22 multinode-523807 kubelet[2966]: E0826 11:44:22.910899    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672662910638074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:22 multinode-523807 kubelet[2966]: E0826 11:44:22.910923    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672662910638074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:32 multinode-523807 kubelet[2966]: E0826 11:44:32.817725    2966 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 11:44:32 multinode-523807 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 11:44:32 multinode-523807 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 11:44:32 multinode-523807 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 11:44:32 multinode-523807 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 11:44:32 multinode-523807 kubelet[2966]: E0826 11:44:32.914598    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672672913307498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:32 multinode-523807 kubelet[2966]: E0826 11:44:32.914639    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672672913307498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:42 multinode-523807 kubelet[2966]: E0826 11:44:42.917041    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672682916737891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 11:44:42 multinode-523807 kubelet[2966]: E0826 11:44:42.917070    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724672682916737891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 11:44:42.527057  137692 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
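The "bufio.Scanner: token too long" error in the stderr block above is the standard Go failure mode when a single line exceeds bufio's default 64 KiB token limit, which lastStart.txt hits because minikube writes entire cluster configs on one log line. A minimal sketch of reading such a file with an enlarged scanner buffer (the path is taken from the error message above; the 1 MiB cap is an illustrative choice, not what the test harness actually uses):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the error message above; adjust for your environment.
	f, err := os.Open("/home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.MaxScanTokenSize is 64 KiB by default; raise the per-line cap
	// to 1 MiB (illustrative) so very long config lines do not abort the scan.
	sc.Buffer(make([]byte, 0, 64*1024), 1<<20)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err) // with the default buffer, "token too long" surfaces here
	}
}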
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-523807 -n multinode-523807
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-523807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                    
x
+
TestPreload (178.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0826 11:49:17.399098  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:49:34.329666  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009774 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m39.121404476s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009774 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-009774 image pull gcr.io/k8s-minikube/busybox: (2.625587641s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-009774
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-009774: (6.614636481s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009774 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009774 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.463150776s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009774 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
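The assertion that fails at preload_test.go:76 reduces to checking that gcr.io/k8s-minikube/busybox appears in the image list after the stop/start cycle, even though the image was pulled successfully before the stop. A standalone sketch of that final check, reusing the same command the harness runs at preload_test.go:71 (binary path and profile name are taken from this run; this is an illustration, not the test's own implementation):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as preload_test.go:71 above.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-009774", "image", "list").CombinedOutput()
	if err != nil {
		log.Fatalf("image list failed: %v\n%s", err, out)
	}
	// This is the condition preload_test.go:76 reports as unmet.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		log.Fatalf("expected gcr.io/k8s-minikube/busybox in image list output, got:\n%s", out)
	}
	log.Println("busybox image survived the restart")
}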
panic.go:626: *** TestPreload FAILED at 2024-08-26 11:51:23.111438215 +0000 UTC m=+3888.437103931
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-009774 -n test-preload-009774
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009774 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-009774 logs -n 25: (1.096423292s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807 sudo cat                                       | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt                       | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m02:/home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n                                                                 | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | multinode-523807-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-523807 ssh -n multinode-523807-m02 sudo cat                                   | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | /home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-523807 node stop m03                                                          | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	| node    | multinode-523807 node start                                                             | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC | 26 Aug 24 11:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| stop    | -p multinode-523807                                                                     | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:36 UTC |                     |
	| start   | -p multinode-523807                                                                     | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:38 UTC | 26 Aug 24 11:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC |                     |
	| node    | multinode-523807 node delete                                                            | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC | 26 Aug 24 11:42 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-523807 stop                                                                   | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:42 UTC |                     |
	| start   | -p multinode-523807                                                                     | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:44 UTC | 26 Aug 24 11:47 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-523807                                                                | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:47 UTC |                     |
	| start   | -p multinode-523807-m02                                                                 | multinode-523807-m02 | jenkins | v1.33.1 | 26 Aug 24 11:47 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-523807-m03                                                                 | multinode-523807-m03 | jenkins | v1.33.1 | 26 Aug 24 11:47 UTC | 26 Aug 24 11:48 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-523807                                                                 | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:48 UTC |                     |
	| delete  | -p multinode-523807-m03                                                                 | multinode-523807-m03 | jenkins | v1.33.1 | 26 Aug 24 11:48 UTC | 26 Aug 24 11:48 UTC |
	| delete  | -p multinode-523807                                                                     | multinode-523807     | jenkins | v1.33.1 | 26 Aug 24 11:48 UTC | 26 Aug 24 11:48 UTC |
	| start   | -p test-preload-009774                                                                  | test-preload-009774  | jenkins | v1.33.1 | 26 Aug 24 11:48 UTC | 26 Aug 24 11:50 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-009774 image pull                                                          | test-preload-009774  | jenkins | v1.33.1 | 26 Aug 24 11:50 UTC | 26 Aug 24 11:50 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-009774                                                                  | test-preload-009774  | jenkins | v1.33.1 | 26 Aug 24 11:50 UTC | 26 Aug 24 11:50 UTC |
	| start   | -p test-preload-009774                                                                  | test-preload-009774  | jenkins | v1.33.1 | 26 Aug 24 11:50 UTC | 26 Aug 24 11:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-009774 image list                                                          | test-preload-009774  | jenkins | v1.33.1 | 26 Aug 24 11:51 UTC | 26 Aug 24 11:51 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 11:50:15
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 11:50:15.469711  140065 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:50:15.469983  140065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:50:15.469993  140065 out.go:358] Setting ErrFile to fd 2...
	I0826 11:50:15.469999  140065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:50:15.470207  140065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:50:15.470772  140065 out.go:352] Setting JSON to false
	I0826 11:50:15.471741  140065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5556,"bootTime":1724667459,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:50:15.471812  140065 start.go:139] virtualization: kvm guest
	I0826 11:50:15.473950  140065 out.go:177] * [test-preload-009774] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:50:15.475617  140065 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:50:15.475637  140065 notify.go:220] Checking for updates...
	I0826 11:50:15.478326  140065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:50:15.479784  140065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:50:15.481031  140065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:50:15.482431  140065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:50:15.483752  140065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:50:15.485424  140065 config.go:182] Loaded profile config "test-preload-009774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0826 11:50:15.485865  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:50:15.485931  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:50:15.501264  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
	I0826 11:50:15.501799  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:50:15.502402  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:50:15.502437  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:50:15.502760  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:50:15.502932  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:15.504988  140065 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0826 11:50:15.506305  140065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:50:15.506625  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:50:15.506666  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:50:15.522135  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41167
	I0826 11:50:15.522638  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:50:15.523266  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:50:15.523290  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:50:15.523653  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:50:15.523903  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:15.562416  140065 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:50:15.563619  140065 start.go:297] selected driver: kvm2
	I0826 11:50:15.563652  140065 start.go:901] validating driver "kvm2" against &{Name:test-preload-009774 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-009774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:50:15.563790  140065 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:50:15.564865  140065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:50:15.564982  140065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:50:15.581663  140065 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:50:15.582035  140065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:50:15.582079  140065 cni.go:84] Creating CNI manager for ""
	I0826 11:50:15.582090  140065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:50:15.582139  140065 start.go:340] cluster config:
	{Name:test-preload-009774 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:50:15.582244  140065 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:50:15.584074  140065 out.go:177] * Starting "test-preload-009774" primary control-plane node in "test-preload-009774" cluster
	I0826 11:50:15.585224  140065 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0826 11:50:15.988887  140065 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0826 11:50:15.988933  140065 cache.go:56] Caching tarball of preloaded images
	I0826 11:50:15.989079  140065 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0826 11:50:15.990799  140065 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0826 11:50:15.992040  140065 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0826 11:50:16.091071  140065 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0826 11:50:27.609920  140065 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0826 11:50:27.610023  140065 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0826 11:50:28.588648  140065 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
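
Note on the preload steps above: the download URL pins an expected md5 via its checksum=md5:... query parameter, and the preloader then verifies the saved tarball before trusting the cache. The following is a minimal, standalone sketch of the same kind of check, assuming the tarball already sits at the cache path shown in the log and using the md5 value from the download URL above; it is an illustration, not minikube's own verification code.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Path and expected md5 are copied from the log lines above.
		path := "/home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
		want := "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Stream the ~460 MB tarball through md5 instead of reading it into memory.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		got := hex.EncodeToString(h.Sum(nil))

		if got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload tarball checksum OK")
	}
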
	I0826 11:50:28.588775  140065 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/config.json ...
	I0826 11:50:28.589006  140065 start.go:360] acquireMachinesLock for test-preload-009774: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:50:28.589072  140065 start.go:364] duration metric: took 44.573µs to acquireMachinesLock for "test-preload-009774"
	I0826 11:50:28.589087  140065 start.go:96] Skipping create...Using existing machine configuration
	I0826 11:50:28.589092  140065 fix.go:54] fixHost starting: 
	I0826 11:50:28.589391  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:50:28.589425  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:50:28.604620  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I0826 11:50:28.605146  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:50:28.605667  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:50:28.605701  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:50:28.606024  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:50:28.606187  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:28.606332  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetState
	I0826 11:50:28.608487  140065 fix.go:112] recreateIfNeeded on test-preload-009774: state=Stopped err=<nil>
	I0826 11:50:28.608511  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	W0826 11:50:28.608689  140065 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 11:50:28.612025  140065 out.go:177] * Restarting existing kvm2 VM for "test-preload-009774" ...
	I0826 11:50:28.613704  140065 main.go:141] libmachine: (test-preload-009774) Calling .Start
	I0826 11:50:28.613917  140065 main.go:141] libmachine: (test-preload-009774) Ensuring networks are active...
	I0826 11:50:28.614921  140065 main.go:141] libmachine: (test-preload-009774) Ensuring network default is active
	I0826 11:50:28.615283  140065 main.go:141] libmachine: (test-preload-009774) Ensuring network mk-test-preload-009774 is active
	I0826 11:50:28.615807  140065 main.go:141] libmachine: (test-preload-009774) Getting domain xml...
	I0826 11:50:28.616675  140065 main.go:141] libmachine: (test-preload-009774) Creating domain...
	I0826 11:50:29.852955  140065 main.go:141] libmachine: (test-preload-009774) Waiting to get IP...
	I0826 11:50:29.853874  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:29.854292  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:29.854390  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:29.854283  140133 retry.go:31] will retry after 302.880438ms: waiting for machine to come up
	I0826 11:50:30.159251  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:30.159730  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:30.159755  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:30.159705  140133 retry.go:31] will retry after 383.405502ms: waiting for machine to come up
	I0826 11:50:30.545321  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:30.545860  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:30.545896  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:30.545793  140133 retry.go:31] will retry after 470.585869ms: waiting for machine to come up
	I0826 11:50:31.018642  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:31.019059  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:31.019091  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:31.018995  140133 retry.go:31] will retry after 416.859312ms: waiting for machine to come up
	I0826 11:50:31.437694  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:31.438144  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:31.438171  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:31.438086  140133 retry.go:31] will retry after 524.916308ms: waiting for machine to come up
	I0826 11:50:31.964896  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:31.965284  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:31.965309  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:31.965235  140133 retry.go:31] will retry after 886.390777ms: waiting for machine to come up
	I0826 11:50:32.852981  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:32.853522  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:32.853548  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:32.853453  140133 retry.go:31] will retry after 1.082434003s: waiting for machine to come up
	I0826 11:50:33.937144  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:33.937640  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:33.937664  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:33.937577  140133 retry.go:31] will retry after 926.630878ms: waiting for machine to come up
	I0826 11:50:34.865481  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:34.865991  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:34.866022  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:34.865929  140133 retry.go:31] will retry after 1.457611868s: waiting for machine to come up
	I0826 11:50:36.325501  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:36.325970  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:36.326004  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:36.325886  140133 retry.go:31] will retry after 2.203495013s: waiting for machine to come up
	I0826 11:50:38.532340  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:38.532763  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:38.532788  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:38.532716  140133 retry.go:31] will retry after 1.939264187s: waiting for machine to come up
	I0826 11:50:40.473543  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:40.474005  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:40.474046  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:40.473978  140133 retry.go:31] will retry after 2.880705428s: waiting for machine to come up
	I0826 11:50:43.357972  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:43.358441  140065 main.go:141] libmachine: (test-preload-009774) DBG | unable to find current IP address of domain test-preload-009774 in network mk-test-preload-009774
	I0826 11:50:43.358473  140065 main.go:141] libmachine: (test-preload-009774) DBG | I0826 11:50:43.358366  140133 retry.go:31] will retry after 4.274916414s: waiting for machine to come up
	I0826 11:50:47.638342  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.639015  140065 main.go:141] libmachine: (test-preload-009774) Found IP for machine: 192.168.39.142
	I0826 11:50:47.639049  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has current primary IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.639062  140065 main.go:141] libmachine: (test-preload-009774) Reserving static IP address...
	I0826 11:50:47.639601  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "test-preload-009774", mac: "52:54:00:5a:37:47", ip: "192.168.39.142"} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:47.639653  140065 main.go:141] libmachine: (test-preload-009774) Reserved static IP address: 192.168.39.142
	I0826 11:50:47.639667  140065 main.go:141] libmachine: (test-preload-009774) DBG | skip adding static IP to network mk-test-preload-009774 - found existing host DHCP lease matching {name: "test-preload-009774", mac: "52:54:00:5a:37:47", ip: "192.168.39.142"}
	I0826 11:50:47.639696  140065 main.go:141] libmachine: (test-preload-009774) DBG | Getting to WaitForSSH function...
	I0826 11:50:47.639706  140065 main.go:141] libmachine: (test-preload-009774) Waiting for SSH to be available...
	I0826 11:50:47.642097  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.642587  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:47.642633  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.642918  140065 main.go:141] libmachine: (test-preload-009774) DBG | Using SSH client type: external
	I0826 11:50:47.642950  140065 main.go:141] libmachine: (test-preload-009774) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa (-rw-------)
	I0826 11:50:47.642982  140065 main.go:141] libmachine: (test-preload-009774) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:50:47.643001  140065 main.go:141] libmachine: (test-preload-009774) DBG | About to run SSH command:
	I0826 11:50:47.643019  140065 main.go:141] libmachine: (test-preload-009774) DBG | exit 0
	I0826 11:50:47.771403  140065 main.go:141] libmachine: (test-preload-009774) DBG | SSH cmd err, output: <nil>: 
	I0826 11:50:47.771903  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetConfigRaw
	I0826 11:50:47.772534  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetIP
	I0826 11:50:47.775164  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.775553  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:47.775588  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.775959  140065 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/config.json ...
	I0826 11:50:47.776203  140065 machine.go:93] provisionDockerMachine start ...
	I0826 11:50:47.776224  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:47.776565  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:47.778673  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.779180  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:47.779210  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.779356  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:47.779584  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:47.779824  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:47.779984  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:47.780181  140065 main.go:141] libmachine: Using SSH client type: native
	I0826 11:50:47.780413  140065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0826 11:50:47.780427  140065 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 11:50:47.895338  140065 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 11:50:47.895369  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetMachineName
	I0826 11:50:47.895628  140065 buildroot.go:166] provisioning hostname "test-preload-009774"
	I0826 11:50:47.895656  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetMachineName
	I0826 11:50:47.895902  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:47.899113  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.899556  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:47.899603  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:47.899824  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:47.900063  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:47.900229  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:47.900410  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:47.900573  140065 main.go:141] libmachine: Using SSH client type: native
	I0826 11:50:47.900767  140065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0826 11:50:47.900784  140065 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-009774 && echo "test-preload-009774" | sudo tee /etc/hostname
	I0826 11:50:48.028887  140065 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-009774
	
	I0826 11:50:48.028919  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.031682  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.032012  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.032041  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.032212  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.032419  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.032601  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.032747  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.032949  140065 main.go:141] libmachine: Using SSH client type: native
	I0826 11:50:48.033155  140065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0826 11:50:48.033181  140065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-009774' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-009774/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-009774' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:50:48.155864  140065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:50:48.155903  140065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:50:48.155936  140065 buildroot.go:174] setting up certificates
	I0826 11:50:48.155949  140065 provision.go:84] configureAuth start
	I0826 11:50:48.155964  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetMachineName
	I0826 11:50:48.156314  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetIP
	I0826 11:50:48.159270  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.159856  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.159886  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.160061  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.163009  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.163472  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.163508  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.163700  140065 provision.go:143] copyHostCerts
	I0826 11:50:48.163763  140065 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:50:48.163783  140065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:50:48.163856  140065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:50:48.163940  140065 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:50:48.163947  140065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:50:48.163969  140065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:50:48.164019  140065 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:50:48.164026  140065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:50:48.164046  140065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:50:48.164092  140065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.test-preload-009774 san=[127.0.0.1 192.168.39.142 localhost minikube test-preload-009774]
	I0826 11:50:48.269758  140065 provision.go:177] copyRemoteCerts
	I0826 11:50:48.269826  140065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:50:48.269855  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.272669  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.273063  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.273099  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.273350  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.273559  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.273776  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.273957  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:50:48.361325  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:50:48.388607  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0826 11:50:48.415798  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:50:48.443302  140065 provision.go:87] duration metric: took 287.339231ms to configureAuth
	I0826 11:50:48.443335  140065 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:50:48.443525  140065 config.go:182] Loaded profile config "test-preload-009774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0826 11:50:48.443632  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.446882  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.447312  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.447344  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.447520  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.447781  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.447963  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.448101  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.448308  140065 main.go:141] libmachine: Using SSH client type: native
	I0826 11:50:48.448475  140065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0826 11:50:48.448490  140065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:50:48.728735  140065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:50:48.728768  140065 machine.go:96] duration metric: took 952.549842ms to provisionDockerMachine
	I0826 11:50:48.728780  140065 start.go:293] postStartSetup for "test-preload-009774" (driver="kvm2")
	I0826 11:50:48.728791  140065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:50:48.728807  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:48.729187  140065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:50:48.729225  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.732195  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.732652  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.732686  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.732949  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.733197  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.733360  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.733517  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:50:48.821470  140065 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:50:48.825860  140065 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:50:48.825899  140065 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:50:48.825973  140065 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:50:48.826046  140065 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:50:48.826139  140065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:50:48.835687  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:50:48.860456  140065 start.go:296] duration metric: took 131.660868ms for postStartSetup
	I0826 11:50:48.860507  140065 fix.go:56] duration metric: took 20.271414189s for fixHost
	I0826 11:50:48.860529  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.863502  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.863953  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.863983  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.864157  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.864376  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.864578  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.864749  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.864915  140065 main.go:141] libmachine: Using SSH client type: native
	I0826 11:50:48.865088  140065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0826 11:50:48.865099  140065 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:50:48.979626  140065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724673048.956667330
	
	I0826 11:50:48.979663  140065 fix.go:216] guest clock: 1724673048.956667330
	I0826 11:50:48.979671  140065 fix.go:229] Guest: 2024-08-26 11:50:48.95666733 +0000 UTC Remote: 2024-08-26 11:50:48.860510852 +0000 UTC m=+33.427998184 (delta=96.156478ms)
	I0826 11:50:48.979692  140065 fix.go:200] guest clock delta is within tolerance: 96.156478ms
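
The clock check above is a straightforward subtraction: the guest reported 11:50:48.956667330 UTC while the host-side reference was 11:50:48.860510852 UTC, giving the 96.156478ms delta that fix.go accepts as within tolerance. A small sketch of the same computation (both timestamps copied from the log; the one-second tolerance used here is illustrative, not minikube's actual constant):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Both instants are taken from the log lines above (UTC).
		guest := time.Date(2024, 8, 26, 11, 50, 48, 956667330, time.UTC)  // guest clock: 1724673048.956667330
		remote := time.Date(2024, 8, 26, 11, 50, 48, 860510852, time.UTC) // host-side reference

		delta := guest.Sub(remote)
		fmt.Printf("delta = %v\n", delta) // prints 96.156478ms

		// Illustrative tolerance only; minikube's real threshold may differ.
		const tolerance = time.Second
		if delta < 0 {
			delta = -delta
		}
		fmt.Println("within tolerance:", delta <= tolerance)
	}
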
	I0826 11:50:48.979698  140065 start.go:83] releasing machines lock for "test-preload-009774", held for 20.390617173s
	I0826 11:50:48.979717  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:48.980014  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetIP
	I0826 11:50:48.982879  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.983251  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.983287  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.983456  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:48.984039  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:48.984273  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:50:48.984403  140065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:50:48.984468  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.984497  140065 ssh_runner.go:195] Run: cat /version.json
	I0826 11:50:48.984522  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:50:48.987272  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.987502  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.987648  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.987673  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.987785  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.987929  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:48.987951  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:48.988056  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.988144  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:50:48.988269  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.988340  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:50:48.988430  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:50:48.988509  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:50:48.988678  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:50:49.106767  140065 ssh_runner.go:195] Run: systemctl --version
	I0826 11:50:49.112768  140065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:50:49.255344  140065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:50:49.261560  140065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:50:49.261644  140065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:50:49.277878  140065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:50:49.277905  140065 start.go:495] detecting cgroup driver to use...
	I0826 11:50:49.277981  140065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:50:49.294106  140065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:50:49.308588  140065 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:50:49.308668  140065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:50:49.322725  140065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:50:49.336662  140065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:50:49.451975  140065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:50:49.621277  140065 docker.go:233] disabling docker service ...
	I0826 11:50:49.621357  140065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:50:49.636642  140065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:50:49.650231  140065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:50:49.768637  140065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:50:49.897011  140065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:50:49.911326  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:50:49.930189  140065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0826 11:50:49.930274  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:49.941007  140065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:50:49.941081  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:49.952126  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:49.963107  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:49.973779  140065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:50:49.984946  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:49.995758  140065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:50.013438  140065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:50:50.024209  140065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:50:50.034524  140065 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:50:50.034591  140065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:50:50.049155  140065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 11:50:50.059433  140065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:50:50.174578  140065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:50:50.312531  140065 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:50:50.312631  140065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:50:50.317933  140065 start.go:563] Will wait 60s for crictl version
	I0826 11:50:50.318010  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:50.322332  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:50:50.365476  140065 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:50:50.365573  140065 ssh_runner.go:195] Run: crio --version
	I0826 11:50:50.393942  140065 ssh_runner.go:195] Run: crio --version
	I0826 11:50:50.425202  140065 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0826 11:50:50.426777  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetIP
	I0826 11:50:50.429662  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:50.430166  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:50:50.430207  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:50:50.430435  140065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 11:50:50.434799  140065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:50:50.447751  140065 kubeadm.go:883] updating cluster {Name:test-preload-009774 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:50:50.447880  140065 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0826 11:50:50.447940  140065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:50:50.485713  140065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0826 11:50:50.485786  140065 ssh_runner.go:195] Run: which lz4
	I0826 11:50:50.490134  140065 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 11:50:50.494273  140065 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 11:50:50.494324  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0826 11:50:51.957745  140065 crio.go:462] duration metric: took 1.46764282s to copy over tarball
	I0826 11:50:51.957841  140065 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 11:50:54.395100  140065 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437216704s)
	I0826 11:50:54.395143  140065 crio.go:469] duration metric: took 2.437366972s to extract the tarball
	I0826 11:50:54.395155  140065 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 11:50:54.435983  140065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:50:54.477361  140065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0826 11:50:54.477389  140065 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 11:50:54.477481  140065 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:50:54.477500  140065 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:54.477511  140065 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:54.477483  140065 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.477559  140065 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:54.477596  140065 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:54.477555  140065 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.477600  140065 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0826 11:50:54.478948  140065 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:50:54.478964  140065 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.478946  140065 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0826 11:50:54.478950  140065 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:54.478993  140065 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:54.478950  140065 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:54.479014  140065 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:54.478964  140065 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.718757  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.744588  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.753540  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:54.765134  140065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0826 11:50:54.765174  140065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.765211  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.767461  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0826 11:50:54.775248  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:54.778484  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:54.780417  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:54.826157  140065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0826 11:50:54.826203  140065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.826255  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.867014  140065 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0826 11:50:54.867068  140065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:54.867096  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.867104  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.905471  140065 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0826 11:50:54.905516  140065 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0826 11:50:54.905564  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.909363  140065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0826 11:50:54.909413  140065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:54.909468  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.928327  140065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0826 11:50:54.928368  140065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0826 11:50:54.928403  140065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:54.928404  140065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:54.928462  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.928486  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.928463  140065 ssh_runner.go:195] Run: which crictl
	I0826 11:50:54.965026  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:54.965068  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:54.965029  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0826 11:50:54.965089  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:54.989101  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:54.989233  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:54.989250  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:55.120377  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0826 11:50:55.120413  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:55.120444  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0826 11:50:55.120533  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:55.120563  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:55.144316  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:55.144378  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0826 11:50:55.278680  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0826 11:50:55.278757  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0826 11:50:55.278792  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0826 11:50:55.278872  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0826 11:50:55.278893  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0826 11:50:55.278932  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0826 11:50:55.278966  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0826 11:50:55.279046  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0826 11:50:55.283949  140065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0826 11:50:55.363409  140065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:50:55.400984  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0826 11:50:55.401058  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0826 11:50:55.401107  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0826 11:50:55.401119  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0826 11:50:55.401156  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0826 11:50:55.401178  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0826 11:50:55.401178  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0826 11:50:55.401209  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0826 11:50:55.401242  140065 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0826 11:50:55.401245  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0826 11:50:55.401276  140065 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0826 11:50:55.401333  140065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0826 11:50:55.401359  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0826 11:50:55.401222  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0826 11:50:55.555221  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0826 11:50:55.555305  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0826 11:50:55.555336  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0826 11:50:55.555366  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0826 11:50:58.084949  140065 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.683593167s)
	I0826 11:50:58.084994  140065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0826 11:50:58.085029  140065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.68363586s)
	I0826 11:50:58.085058  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0826 11:50:58.085089  140065 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0826 11:50:58.085144  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0826 11:50:58.533983  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0826 11:50:58.534035  140065 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0826 11:50:58.534101  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0826 11:50:58.673070  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0826 11:50:58.673129  140065 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0826 11:50:58.673183  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0826 11:50:59.416087  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0826 11:50:59.416135  140065 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0826 11:50:59.416189  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0826 11:51:00.160123  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0826 11:51:00.160176  140065 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0826 11:51:00.160298  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0826 11:51:02.108132  140065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.947791948s)
	I0826 11:51:02.108178  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0826 11:51:02.108209  140065 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0826 11:51:02.108254  140065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0826 11:51:02.556740  140065 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0826 11:51:02.556804  140065 cache_images.go:123] Successfully loaded all cached images
	I0826 11:51:02.556813  140065 cache_images.go:92] duration metric: took 8.079409345s to LoadCachedImages
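
Editor's note: the lines above are minikube's cached-image fallback. Because "sudo crictl images --output json" did not list the expected v1.24.4 control-plane images, each tarball under .minikube/cache/images was copied to /var/lib/minikube/images and loaded with "podman load". Below is a minimal Go sketch of the same presence check; it is illustrative only (not minikube's code), and the JSON field names ("images", "repoTags") are assumed from the CRI ListImages response rather than taken from this log.

// checkimages.go: report which required images the CRI runtime already has,
// using the same "crictl images --output json" call shown in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/etcd:3.5.3-0",
		"registry.k8s.io/coredns/coredns:v1.8.6",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		fmt.Printf("%-45s present=%v\n", want, have[want])
	}
}
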
	I0826 11:51:02.556831  140065 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.24.4 crio true true} ...
	I0826 11:51:02.556975  140065 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-009774 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-009774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:51:02.557052  140065 ssh_runner.go:195] Run: crio config
	I0826 11:51:02.609575  140065 cni.go:84] Creating CNI manager for ""
	I0826 11:51:02.609598  140065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:51:02.609612  140065 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:51:02.609639  140065 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-009774 NodeName:test-preload-009774 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 11:51:02.609801  140065 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-009774"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
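
Editor's note: the kubeadm config above is generated per profile (node name, IP, version and CIDRs substituted in), presumably from a template in minikube's bootstrapper. The Go sketch below shows the general technique with text/template; the template text and field names here are illustrative assumptions, not minikube's actual template.

// rendercfg.go: render per-profile values into a kubeadm-style ClusterConfiguration.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	data := struct {
		KubernetesVersion, NodeIP, PodCIDR, ServiceCIDR string
	}{"v1.24.4", "192.168.39.142", "10.244.0.0/16", "10.96.0.0/12"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
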
	
	I0826 11:51:02.609892  140065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0826 11:51:02.620396  140065 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:51:02.620481  140065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 11:51:02.630741  140065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0826 11:51:02.647983  140065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:51:02.665100  140065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0826 11:51:02.682811  140065 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0826 11:51:02.686919  140065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:51:02.699716  140065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:51:02.818895  140065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:51:02.836415  140065 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774 for IP: 192.168.39.142
	I0826 11:51:02.836453  140065 certs.go:194] generating shared ca certs ...
	I0826 11:51:02.836476  140065 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:51:02.836685  140065 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:51:02.836749  140065 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:51:02.836764  140065 certs.go:256] generating profile certs ...
	I0826 11:51:02.836885  140065 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/client.key
	I0826 11:51:02.836967  140065 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/apiserver.key.565cd949
	I0826 11:51:02.837040  140065 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/proxy-client.key
	I0826 11:51:02.837208  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:51:02.837254  140065 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:51:02.837269  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:51:02.837296  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:51:02.837319  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:51:02.837352  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:51:02.837391  140065 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:51:02.838163  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:51:02.877218  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:51:02.906791  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:51:02.934151  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:51:02.959296  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 11:51:02.990208  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:51:03.020507  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:51:03.054884  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:51:03.084835  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:51:03.109539  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:51:03.133685  140065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:51:03.158216  140065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:51:03.175467  140065 ssh_runner.go:195] Run: openssl version
	I0826 11:51:03.181251  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:51:03.193021  140065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:51:03.197515  140065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:51:03.197581  140065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:51:03.203512  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:51:03.214518  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:51:03.225946  140065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:51:03.230414  140065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:51:03.230465  140065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:51:03.236135  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:51:03.247114  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:51:03.258239  140065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:51:03.262958  140065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:51:03.263025  140065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:51:03.268840  140065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:51:03.280208  140065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:51:03.285061  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 11:51:03.291446  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 11:51:03.297774  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 11:51:03.303936  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 11:51:03.309927  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 11:51:03.315715  140065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
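
Editor's note: the "openssl x509 ... -checkend 86400" calls above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in Go using crypto/x509 is sketched below; the certificate path is just one of those listed in the log, and the snippet is illustrative rather than minikube's implementation.

// checkcert.go: report whether a PEM certificate is still valid 24 hours from now,
// mirroring "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // example path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	fmt.Printf("expires %s; valid for another 24h: %v\n", cert.NotAfter, cert.NotAfter.After(deadline))
}
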
	I0826 11:51:03.321647  140065 kubeadm.go:392] StartCluster: {Name:test-preload-009774 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:51:03.321731  140065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:51:03.321826  140065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:51:03.358798  140065 cri.go:89] found id: ""
	I0826 11:51:03.358906  140065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 11:51:03.369173  140065 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 11:51:03.369196  140065 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 11:51:03.369251  140065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 11:51:03.379068  140065 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:51:03.379622  140065 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-009774" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:51:03.379780  140065 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-009774" cluster setting kubeconfig missing "test-preload-009774" context setting]
	I0826 11:51:03.380131  140065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:51:03.380923  140065 kapi.go:59] client config for test-preload-009774: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 11:51:03.381725  140065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 11:51:03.391200  140065 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.142
	I0826 11:51:03.391237  140065 kubeadm.go:1160] stopping kube-system containers ...
	I0826 11:51:03.391259  140065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 11:51:03.391320  140065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:51:03.428990  140065 cri.go:89] found id: ""
	I0826 11:51:03.429066  140065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 11:51:03.445802  140065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:51:03.455624  140065 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:51:03.455648  140065 kubeadm.go:157] found existing configuration files:
	
	I0826 11:51:03.455703  140065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:51:03.464809  140065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:51:03.464893  140065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:51:03.474305  140065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:51:03.483369  140065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:51:03.483441  140065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:51:03.492680  140065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:51:03.501598  140065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:51:03.501669  140065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:51:03.511345  140065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:51:03.520215  140065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:51:03.520283  140065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:51:03.529833  140065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 11:51:03.539117  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:03.643463  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:04.102950  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:04.366327  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:04.427837  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:04.495458  140065 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:51:04.495546  140065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:51:04.995972  140065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:51:05.496227  140065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:51:05.532251  140065 api_server.go:72] duration metric: took 1.036812694s to wait for apiserver process to appear ...
	I0826 11:51:05.532284  140065 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:51:05.532308  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:05.532873  140065 api_server.go:269] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0826 11:51:06.032700  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:09.233749  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 11:51:09.233786  140065 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 11:51:09.233805  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:09.274691  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 11:51:09.274740  140065 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 11:51:09.533129  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:09.538295  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 11:51:09.538337  140065 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 11:51:10.032901  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:10.038925  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 11:51:10.038961  140065 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 11:51:10.532417  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:10.539957  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 11:51:10.539994  140065 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 11:51:11.032533  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:11.038746  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0826 11:51:11.046198  140065 api_server.go:141] control plane version: v1.24.4
	I0826 11:51:11.046228  140065 api_server.go:131] duration metric: took 5.513935745s to wait for apiserver health ...
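
Editor's note: the healthz sequence above is a simple retry loop: poll https://<node-ip>:8443/healthz, tolerating connection refused, 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing) until a 200 arrives. A small Go sketch of that loop is shown below; it is illustrative only, and it skips TLS verification purely for brevity, whereas minikube's real client trusts the cluster CA.

// waithealthz.go: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.142:8443/healthz", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
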
	I0826 11:51:11.046238  140065 cni.go:84] Creating CNI manager for ""
	I0826 11:51:11.046244  140065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:51:11.048056  140065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 11:51:11.049570  140065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 11:51:11.060022  140065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 11:51:11.088098  140065 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:51:11.088194  140065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 11:51:11.088212  140065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 11:51:11.100609  140065 system_pods.go:59] 7 kube-system pods found
	I0826 11:51:11.100649  140065 system_pods.go:61] "coredns-6d4b75cb6d-s7742" [30322cea-1fac-44a4-98c3-e2941cc4f826] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 11:51:11.100655  140065 system_pods.go:61] "etcd-test-preload-009774" [52fc7247-3fab-4218-a837-95cd65d4bd52] Running
	I0826 11:51:11.100663  140065 system_pods.go:61] "kube-apiserver-test-preload-009774" [24cc6460-156d-40c3-9b42-c6b2ebf151da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 11:51:11.100668  140065 system_pods.go:61] "kube-controller-manager-test-preload-009774" [4aca24a8-c551-4d45-b3d7-12aa24063b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 11:51:11.100674  140065 system_pods.go:61] "kube-proxy-947cx" [a9f434e8-b3e0-4667-921f-8620479bd95d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0826 11:51:11.100678  140065 system_pods.go:61] "kube-scheduler-test-preload-009774" [42167aaf-8b4e-4b6e-a30f-90dd81d69567] Running
	I0826 11:51:11.100683  140065 system_pods.go:61] "storage-provisioner" [2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 11:51:11.100690  140065 system_pods.go:74] duration metric: took 12.568236ms to wait for pod list to return data ...
	I0826 11:51:11.100700  140065 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:51:11.104379  140065 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:51:11.104417  140065 node_conditions.go:123] node cpu capacity is 2
	I0826 11:51:11.104429  140065 node_conditions.go:105] duration metric: took 3.724976ms to run NodePressure ...
	I0826 11:51:11.104450  140065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 11:51:11.322978  140065 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 11:51:11.327040  140065 kubeadm.go:739] kubelet initialised
	I0826 11:51:11.327063  140065 kubeadm.go:740] duration metric: took 4.05704ms waiting for restarted kubelet to initialise ...
	I0826 11:51:11.327072  140065 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:51:11.331579  140065 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:11.337058  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.337087  140065 pod_ready.go:82] duration metric: took 5.478755ms for pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:11.337099  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.337107  140065 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:11.341337  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "etcd-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.341371  140065 pod_ready.go:82] duration metric: took 4.248808ms for pod "etcd-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:11.341380  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "etcd-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.341386  140065 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:11.346467  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "kube-apiserver-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.346495  140065 pod_ready.go:82] duration metric: took 5.099594ms for pod "kube-apiserver-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:11.346504  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "kube-apiserver-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.346511  140065 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:11.492249  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.492319  140065 pod_ready.go:82] duration metric: took 145.79659ms for pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:11.492330  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.492337  140065 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-947cx" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:11.892392  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "kube-proxy-947cx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.892425  140065 pod_ready.go:82] duration metric: took 400.077892ms for pod "kube-proxy-947cx" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:11.892435  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "kube-proxy-947cx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:11.892441  140065 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:12.293293  140065 pod_ready.go:98] node "test-preload-009774" hosting pod "kube-scheduler-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:12.293328  140065 pod_ready.go:82] duration metric: took 400.880246ms for pod "kube-scheduler-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	E0826 11:51:12.293339  140065 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-009774" hosting pod "kube-scheduler-test-preload-009774" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:12.293348  140065 pod_ready.go:39] duration metric: took 966.267408ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:51:12.293367  140065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 11:51:12.305668  140065 ops.go:34] apiserver oom_adj: -16
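	(The oom_adj probe above reads the kernel OOM-score adjustment of the kube-apiserver process over SSH; the same value, -16 in this run, can be checked interactively from the host. A minimal sketch, assuming the profile name from this log and that the command substitution should run inside the guest, hence the single quotes:)

	# Read the apiserver's OOM adjustment inside the VM; single quotes keep
	# $(pgrep ...) from being expanded by the local shell instead of the guest's.
	minikube -p test-preload-009774 ssh 'cat /proc/$(pgrep kube-apiserver)/oom_adj'
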
	I0826 11:51:12.305699  140065 kubeadm.go:597] duration metric: took 8.936495802s to restartPrimaryControlPlane
	I0826 11:51:12.305712  140065 kubeadm.go:394] duration metric: took 8.984072825s to StartCluster
	I0826 11:51:12.305731  140065 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:51:12.305800  140065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:51:12.306416  140065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:51:12.306653  140065 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:51:12.306773  140065 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 11:51:12.306859  140065 config.go:182] Loaded profile config "test-preload-009774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0826 11:51:12.306864  140065 addons.go:69] Setting storage-provisioner=true in profile "test-preload-009774"
	I0826 11:51:12.306880  140065 addons.go:69] Setting default-storageclass=true in profile "test-preload-009774"
	I0826 11:51:12.306898  140065 addons.go:234] Setting addon storage-provisioner=true in "test-preload-009774"
	W0826 11:51:12.306907  140065 addons.go:243] addon storage-provisioner should already be in state true
	I0826 11:51:12.306922  140065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-009774"
	I0826 11:51:12.306942  140065 host.go:66] Checking if "test-preload-009774" exists ...
	I0826 11:51:12.307263  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:51:12.307303  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:51:12.307344  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:51:12.307391  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:51:12.308411  140065 out.go:177] * Verifying Kubernetes components...
	I0826 11:51:12.310419  140065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:51:12.324153  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0826 11:51:12.324159  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0826 11:51:12.324668  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:51:12.324768  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:51:12.325195  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:51:12.325213  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:51:12.325347  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:51:12.325387  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:51:12.325588  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:51:12.325769  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:51:12.325815  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetState
	I0826 11:51:12.326352  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:51:12.326395  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:51:12.328096  140065 kapi.go:59] client config for test-preload-009774: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/client.crt", KeyFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/profiles/test-preload-009774/client.key", CAFile:"/home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0826 11:51:12.328384  140065 addons.go:234] Setting addon default-storageclass=true in "test-preload-009774"
	W0826 11:51:12.328397  140065 addons.go:243] addon default-storageclass should already be in state true
	I0826 11:51:12.328430  140065 host.go:66] Checking if "test-preload-009774" exists ...
	I0826 11:51:12.328706  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:51:12.328748  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:51:12.344199  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0826 11:51:12.344702  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:51:12.345405  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:51:12.345451  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:51:12.345940  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:51:12.346603  140065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:51:12.346666  140065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:51:12.346909  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0826 11:51:12.347372  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:51:12.348017  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:51:12.348048  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:51:12.348411  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:51:12.348690  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetState
	I0826 11:51:12.350650  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:51:12.353131  140065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:51:12.354748  140065 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:51:12.354773  140065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 11:51:12.354802  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:51:12.358769  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:51:12.359286  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:51:12.359312  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:51:12.359611  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:51:12.359884  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:51:12.360092  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:51:12.360270  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:51:12.365120  140065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0826 11:51:12.365609  140065 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:51:12.366181  140065 main.go:141] libmachine: Using API Version  1
	I0826 11:51:12.366216  140065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:51:12.366594  140065 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:51:12.366855  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetState
	I0826 11:51:12.369056  140065 main.go:141] libmachine: (test-preload-009774) Calling .DriverName
	I0826 11:51:12.369332  140065 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 11:51:12.369349  140065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 11:51:12.369369  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHHostname
	I0826 11:51:12.372345  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:51:12.372771  140065 main.go:141] libmachine: (test-preload-009774) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:37:47", ip: ""} in network mk-test-preload-009774: {Iface:virbr1 ExpiryTime:2024-08-26 12:50:39 +0000 UTC Type:0 Mac:52:54:00:5a:37:47 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:test-preload-009774 Clientid:01:52:54:00:5a:37:47}
	I0826 11:51:12.372803  140065 main.go:141] libmachine: (test-preload-009774) DBG | domain test-preload-009774 has defined IP address 192.168.39.142 and MAC address 52:54:00:5a:37:47 in network mk-test-preload-009774
	I0826 11:51:12.373165  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHPort
	I0826 11:51:12.373486  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHKeyPath
	I0826 11:51:12.373710  140065 main.go:141] libmachine: (test-preload-009774) Calling .GetSSHUsername
	I0826 11:51:12.373908  140065 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/test-preload-009774/id_rsa Username:docker}
	I0826 11:51:12.478471  140065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:51:12.498692  140065 node_ready.go:35] waiting up to 6m0s for node "test-preload-009774" to be "Ready" ...
	I0826 11:51:12.597970  140065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 11:51:12.628724  140065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 11:51:13.626596  140065 main.go:141] libmachine: Making call to close driver server
	I0826 11:51:13.626630  140065 main.go:141] libmachine: (test-preload-009774) Calling .Close
	I0826 11:51:13.626710  140065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.028702638s)
	I0826 11:51:13.626749  140065 main.go:141] libmachine: Making call to close driver server
	I0826 11:51:13.626763  140065 main.go:141] libmachine: (test-preload-009774) Calling .Close
	I0826 11:51:13.626996  140065 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:51:13.627014  140065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:51:13.627024  140065 main.go:141] libmachine: Making call to close driver server
	I0826 11:51:13.627032  140065 main.go:141] libmachine: (test-preload-009774) Calling .Close
	I0826 11:51:13.627097  140065 main.go:141] libmachine: (test-preload-009774) DBG | Closing plugin on server side
	I0826 11:51:13.627107  140065 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:51:13.627117  140065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:51:13.627128  140065 main.go:141] libmachine: Making call to close driver server
	I0826 11:51:13.627135  140065 main.go:141] libmachine: (test-preload-009774) Calling .Close
	I0826 11:51:13.627249  140065 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:51:13.627264  140065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:51:13.627352  140065 main.go:141] libmachine: (test-preload-009774) DBG | Closing plugin on server side
	I0826 11:51:13.627385  140065 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:51:13.627401  140065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:51:13.635254  140065 main.go:141] libmachine: Making call to close driver server
	I0826 11:51:13.635277  140065 main.go:141] libmachine: (test-preload-009774) Calling .Close
	I0826 11:51:13.635562  140065 main.go:141] libmachine: Successfully made call to close driver server
	I0826 11:51:13.635584  140065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 11:51:13.635607  140065 main.go:141] libmachine: (test-preload-009774) DBG | Closing plugin on server side
	I0826 11:51:13.637667  140065 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0826 11:51:13.638817  140065 addons.go:510] duration metric: took 1.332053828s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0826 11:51:14.502679  140065 node_ready.go:53] node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:16.503644  140065 node_ready.go:53] node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:19.002946  140065 node_ready.go:53] node "test-preload-009774" has status "Ready":"False"
	I0826 11:51:20.002477  140065 node_ready.go:49] node "test-preload-009774" has status "Ready":"True"
	I0826 11:51:20.002509  140065 node_ready.go:38] duration metric: took 7.503771086s for node "test-preload-009774" to be "Ready" ...
	I0826 11:51:20.002523  140065 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:51:20.007410  140065 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:20.012912  140065 pod_ready.go:93] pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:20.012936  140065 pod_ready.go:82] duration metric: took 5.497083ms for pod "coredns-6d4b75cb6d-s7742" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:20.012946  140065 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.518265  140065 pod_ready.go:93] pod "etcd-test-preload-009774" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:21.518305  140065 pod_ready.go:82] duration metric: took 1.505350667s for pod "etcd-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.518319  140065 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.523799  140065 pod_ready.go:93] pod "kube-apiserver-test-preload-009774" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:21.523825  140065 pod_ready.go:82] duration metric: took 5.497216ms for pod "kube-apiserver-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.523838  140065 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.537394  140065 pod_ready.go:93] pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:21.537419  140065 pod_ready.go:82] duration metric: took 13.572715ms for pod "kube-controller-manager-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.537435  140065 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-947cx" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.603755  140065 pod_ready.go:93] pod "kube-proxy-947cx" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:21.603786  140065 pod_ready.go:82] duration metric: took 66.344588ms for pod "kube-proxy-947cx" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:21.603799  140065 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:22.002475  140065 pod_ready.go:93] pod "kube-scheduler-test-preload-009774" in "kube-system" namespace has status "Ready":"True"
	I0826 11:51:22.002504  140065 pod_ready.go:82] duration metric: took 398.695573ms for pod "kube-scheduler-test-preload-009774" in "kube-system" namespace to be "Ready" ...
	I0826 11:51:22.002520  140065 pod_ready.go:39] duration metric: took 1.999984997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 11:51:22.002537  140065 api_server.go:52] waiting for apiserver process to appear ...
	I0826 11:51:22.002636  140065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:51:22.018061  140065 api_server.go:72] duration metric: took 9.711365509s to wait for apiserver process to appear ...
	I0826 11:51:22.018093  140065 api_server.go:88] waiting for apiserver healthz status ...
	I0826 11:51:22.018115  140065 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0826 11:51:22.024211  140065 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0826 11:51:22.025661  140065 api_server.go:141] control plane version: v1.24.4
	I0826 11:51:22.025686  140065 api_server.go:131] duration metric: took 7.586463ms to wait for apiserver health ...
	I0826 11:51:22.025694  140065 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 11:51:22.206336  140065 system_pods.go:59] 7 kube-system pods found
	I0826 11:51:22.206378  140065 system_pods.go:61] "coredns-6d4b75cb6d-s7742" [30322cea-1fac-44a4-98c3-e2941cc4f826] Running
	I0826 11:51:22.206384  140065 system_pods.go:61] "etcd-test-preload-009774" [52fc7247-3fab-4218-a837-95cd65d4bd52] Running
	I0826 11:51:22.206388  140065 system_pods.go:61] "kube-apiserver-test-preload-009774" [24cc6460-156d-40c3-9b42-c6b2ebf151da] Running
	I0826 11:51:22.206391  140065 system_pods.go:61] "kube-controller-manager-test-preload-009774" [4aca24a8-c551-4d45-b3d7-12aa24063b39] Running
	I0826 11:51:22.206394  140065 system_pods.go:61] "kube-proxy-947cx" [a9f434e8-b3e0-4667-921f-8620479bd95d] Running
	I0826 11:51:22.206397  140065 system_pods.go:61] "kube-scheduler-test-preload-009774" [42167aaf-8b4e-4b6e-a30f-90dd81d69567] Running
	I0826 11:51:22.206403  140065 system_pods.go:61] "storage-provisioner" [2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 11:51:22.206409  140065 system_pods.go:74] duration metric: took 180.709741ms to wait for pod list to return data ...
	I0826 11:51:22.206419  140065 default_sa.go:34] waiting for default service account to be created ...
	I0826 11:51:22.402785  140065 default_sa.go:45] found service account: "default"
	I0826 11:51:22.402821  140065 default_sa.go:55] duration metric: took 196.394222ms for default service account to be created ...
	I0826 11:51:22.402852  140065 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 11:51:22.609797  140065 system_pods.go:86] 7 kube-system pods found
	I0826 11:51:22.609834  140065 system_pods.go:89] "coredns-6d4b75cb6d-s7742" [30322cea-1fac-44a4-98c3-e2941cc4f826] Running
	I0826 11:51:22.609841  140065 system_pods.go:89] "etcd-test-preload-009774" [52fc7247-3fab-4218-a837-95cd65d4bd52] Running
	I0826 11:51:22.609845  140065 system_pods.go:89] "kube-apiserver-test-preload-009774" [24cc6460-156d-40c3-9b42-c6b2ebf151da] Running
	I0826 11:51:22.609851  140065 system_pods.go:89] "kube-controller-manager-test-preload-009774" [4aca24a8-c551-4d45-b3d7-12aa24063b39] Running
	I0826 11:51:22.609856  140065 system_pods.go:89] "kube-proxy-947cx" [a9f434e8-b3e0-4667-921f-8620479bd95d] Running
	I0826 11:51:22.609861  140065 system_pods.go:89] "kube-scheduler-test-preload-009774" [42167aaf-8b4e-4b6e-a30f-90dd81d69567] Running
	I0826 11:51:22.609870  140065 system_pods.go:89] "storage-provisioner" [2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 11:51:22.609888  140065 system_pods.go:126] duration metric: took 207.026209ms to wait for k8s-apps to be running ...
	I0826 11:51:22.609904  140065 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 11:51:22.609966  140065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:51:22.624940  140065 system_svc.go:56] duration metric: took 15.024682ms WaitForService to wait for kubelet
	I0826 11:51:22.624973  140065 kubeadm.go:582] duration metric: took 10.3182862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:51:22.625014  140065 node_conditions.go:102] verifying NodePressure condition ...
	I0826 11:51:22.804070  140065 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 11:51:22.804100  140065 node_conditions.go:123] node cpu capacity is 2
	I0826 11:51:22.804111  140065 node_conditions.go:105] duration metric: took 179.091131ms to run NodePressure ...
	I0826 11:51:22.804124  140065 start.go:241] waiting for startup goroutines ...
	I0826 11:51:22.804131  140065 start.go:246] waiting for cluster config update ...
	I0826 11:51:22.804140  140065 start.go:255] writing updated cluster config ...
	I0826 11:51:22.804393  140065 ssh_runner.go:195] Run: rm -f paused
	I0826 11:51:22.856976  140065 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0826 11:51:22.858921  140065 out.go:201] 
	W0826 11:51:22.860342  140065 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0826 11:51:22.862032  140065 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0826 11:51:22.863481  140065 out.go:177] * Done! kubectl is now configured to use "test-preload-009774" cluster and "default" namespace by default
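	(The run closes with a client/server skew warning: the host kubectl is 1.31.0 while the cluster runs v1.24.4. The start-up checks logged above — the healthz probe returning 200/ok and the control-plane version read — can be repeated by hand, and the hint in the output points at the version-matched kubectl bundled with minikube. A minimal sketch, assuming the kubeconfig context carries the profile name as minikube normally writes it:)

	# Reproduce the healthz probe; a healthy apiserver answers "ok" (the HTTP 200 above).
	kubectl --context test-preload-009774 get --raw='/healthz'

	# Show client and server versions; the skew warning compares these two.
	kubectl --context test-preload-009774 version

	# Or sidestep the skew with the bundled, version-matched kubectl,
	# as the "Want kubectl v1.24.4?" hint suggests.
	minikube -p test-preload-009774 kubectl -- get pods -A
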
	
	
	==> CRI-O <==
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.766210074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673083766185037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=241f5af5-a218-48c9-9a00-dd11ca7baa91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.766826982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11c9341e-a02d-40a1-bd7b-b5f6a1b04266 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.766885413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11c9341e-a02d-40a1-bd7b-b5f6a1b04266 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.767115678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a027479019d31039092bddd75a71aa9ccd35007c233fb0c280314b64c5da2ec7,PodSandboxId:bfcb312d3f51fa33f8b98139ace2770bfa5239adc5d366b9ca31447313b1355a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724673077501635611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s7742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30322cea-1fac-44a4-98c3-e2941cc4f826,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1252d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c,PodSandboxId:a205d06bd2fba3bd56460105808cc24ed200ffe632bcd1ea5b2c6166e37e3998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724673070643467874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 69c98071,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7087742ad8fe0b0b24ed6c0ca10f7b502374a99bf79afb40116a37cb7462fa59,PodSandboxId:588abc0823b3cd75d273369d0316e75af811abb3aaec4ce8552019f6b7127221,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724673070242654202,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-947cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f
434e8-b3e0-4667-921f-8620479bd95d,},Annotations:map[string]string{io.kubernetes.container.hash: 933617d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115cc037f0e7ccbf4d11644d2ca77b50a674b7853ef02738d12469014175fb7c,PodSandboxId:7e3d770e8bef85658539ac8b1c02892efb0fffb98aaff681fa86dc16c5ce7b9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724673065246352075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c8fcbdf99e8916b43595511ebbf180,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2aa0afe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3ebedb1134a143b9011555b5b3e68d5684fe194a9d68fe249d043faea42d9,PodSandboxId:974934086a8edc016778d8f4d7d20ab49b6bfe56d549285fbb056cbad25a4bef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724673065231897642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e05c869d70e36368c8fdfb48
e6f33d77,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd691a8cf2b963d5ac3c91ff9bbf96f9d57cd4626ca652104cda638866a0bee,PodSandboxId:6949e1f4eb6bd815b1474a7a38007dfbf2ce3e92d9612cf14b24b98521790cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724673065193336386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 715ea14649e36e5d9210ae20c9394e64,},
Annotations:map[string]string{io.kubernetes.container.hash: 4bbe99ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14aba64e13773ff5c6807f0bf38574da49e16373d3476db44640ac38ed3b7724,PodSandboxId:c2da35f6e550973abebb0b15c5a4b2e07bb7be1567ff851165e086cfba00ab19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724673065229791429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa571ac07fdf99ddb8d4f6c0234d44d6,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11c9341e-a02d-40a1-bd7b-b5f6a1b04266 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.803108465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc69c123-0102-4e76-bec8-383019625fc2 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.803188726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc69c123-0102-4e76-bec8-383019625fc2 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.804319614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8087f6b2-04da-4a3e-a279-c9bb6ff6e7a7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.804752643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673083804731424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8087f6b2-04da-4a3e-a279-c9bb6ff6e7a7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.805411539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1df569dc-4c91-4384-b012-e1858be83aad name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.805462882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1df569dc-4c91-4384-b012-e1858be83aad name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.805660826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a027479019d31039092bddd75a71aa9ccd35007c233fb0c280314b64c5da2ec7,PodSandboxId:bfcb312d3f51fa33f8b98139ace2770bfa5239adc5d366b9ca31447313b1355a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724673077501635611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s7742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30322cea-1fac-44a4-98c3-e2941cc4f826,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1252d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c,PodSandboxId:a205d06bd2fba3bd56460105808cc24ed200ffe632bcd1ea5b2c6166e37e3998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724673070643467874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 69c98071,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7087742ad8fe0b0b24ed6c0ca10f7b502374a99bf79afb40116a37cb7462fa59,PodSandboxId:588abc0823b3cd75d273369d0316e75af811abb3aaec4ce8552019f6b7127221,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724673070242654202,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-947cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f
434e8-b3e0-4667-921f-8620479bd95d,},Annotations:map[string]string{io.kubernetes.container.hash: 933617d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115cc037f0e7ccbf4d11644d2ca77b50a674b7853ef02738d12469014175fb7c,PodSandboxId:7e3d770e8bef85658539ac8b1c02892efb0fffb98aaff681fa86dc16c5ce7b9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724673065246352075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c8fcbdf99e8916b43595511ebbf180,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2aa0afe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3ebedb1134a143b9011555b5b3e68d5684fe194a9d68fe249d043faea42d9,PodSandboxId:974934086a8edc016778d8f4d7d20ab49b6bfe56d549285fbb056cbad25a4bef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724673065231897642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e05c869d70e36368c8fdfb48
e6f33d77,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd691a8cf2b963d5ac3c91ff9bbf96f9d57cd4626ca652104cda638866a0bee,PodSandboxId:6949e1f4eb6bd815b1474a7a38007dfbf2ce3e92d9612cf14b24b98521790cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724673065193336386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 715ea14649e36e5d9210ae20c9394e64,},
Annotations:map[string]string{io.kubernetes.container.hash: 4bbe99ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14aba64e13773ff5c6807f0bf38574da49e16373d3476db44640ac38ed3b7724,PodSandboxId:c2da35f6e550973abebb0b15c5a4b2e07bb7be1567ff851165e086cfba00ab19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724673065229791429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa571ac07fdf99ddb8d4f6c0234d44d6,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1df569dc-4c91-4384-b012-e1858be83aad name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.847364439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8dda7fb8-16e7-4751-b8ce-bafe50f3c3bc name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.847459936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8dda7fb8-16e7-4751-b8ce-bafe50f3c3bc name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.848886337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78fbcafc-3a0f-41bf-8190-4c02266bb63f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.849374653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673083849350164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78fbcafc-3a0f-41bf-8190-4c02266bb63f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.849866611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=581f2990-7456-4c65-9965-d9db125d5710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.849975675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=581f2990-7456-4c65-9965-d9db125d5710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.850154970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a027479019d31039092bddd75a71aa9ccd35007c233fb0c280314b64c5da2ec7,PodSandboxId:bfcb312d3f51fa33f8b98139ace2770bfa5239adc5d366b9ca31447313b1355a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724673077501635611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s7742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30322cea-1fac-44a4-98c3-e2941cc4f826,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1252d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c,PodSandboxId:a205d06bd2fba3bd56460105808cc24ed200ffe632bcd1ea5b2c6166e37e3998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724673070643467874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 69c98071,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7087742ad8fe0b0b24ed6c0ca10f7b502374a99bf79afb40116a37cb7462fa59,PodSandboxId:588abc0823b3cd75d273369d0316e75af811abb3aaec4ce8552019f6b7127221,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724673070242654202,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-947cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f
434e8-b3e0-4667-921f-8620479bd95d,},Annotations:map[string]string{io.kubernetes.container.hash: 933617d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115cc037f0e7ccbf4d11644d2ca77b50a674b7853ef02738d12469014175fb7c,PodSandboxId:7e3d770e8bef85658539ac8b1c02892efb0fffb98aaff681fa86dc16c5ce7b9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724673065246352075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c8fcbdf99e8916b43595511ebbf180,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2aa0afe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3ebedb1134a143b9011555b5b3e68d5684fe194a9d68fe249d043faea42d9,PodSandboxId:974934086a8edc016778d8f4d7d20ab49b6bfe56d549285fbb056cbad25a4bef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724673065231897642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e05c869d70e36368c8fdfb48
e6f33d77,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd691a8cf2b963d5ac3c91ff9bbf96f9d57cd4626ca652104cda638866a0bee,PodSandboxId:6949e1f4eb6bd815b1474a7a38007dfbf2ce3e92d9612cf14b24b98521790cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724673065193336386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 715ea14649e36e5d9210ae20c9394e64,},
Annotations:map[string]string{io.kubernetes.container.hash: 4bbe99ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14aba64e13773ff5c6807f0bf38574da49e16373d3476db44640ac38ed3b7724,PodSandboxId:c2da35f6e550973abebb0b15c5a4b2e07bb7be1567ff851165e086cfba00ab19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724673065229791429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa571ac07fdf99ddb8d4f6c0234d44d6,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=581f2990-7456-4c65-9965-d9db125d5710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.883902608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1813a6ac-ac0a-482f-921a-0b7c8311cf75 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.884027547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1813a6ac-ac0a-482f-921a-0b7c8311cf75 name=/runtime.v1.RuntimeService/Version
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.885312959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55405a5a-99f9-424a-b41a-889b9948a440 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.885744012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673083885720568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55405a5a-99f9-424a-b41a-889b9948a440 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.886462047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57b5976c-3cfe-44d6-93bb-a5164a32df39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.886531157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57b5976c-3cfe-44d6-93bb-a5164a32df39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 11:51:23 test-preload-009774 crio[679]: time="2024-08-26 11:51:23.886770315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a027479019d31039092bddd75a71aa9ccd35007c233fb0c280314b64c5da2ec7,PodSandboxId:bfcb312d3f51fa33f8b98139ace2770bfa5239adc5d366b9ca31447313b1355a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724673077501635611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s7742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30322cea-1fac-44a4-98c3-e2941cc4f826,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1252d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c,PodSandboxId:a205d06bd2fba3bd56460105808cc24ed200ffe632bcd1ea5b2c6166e37e3998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724673070643467874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 69c98071,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7087742ad8fe0b0b24ed6c0ca10f7b502374a99bf79afb40116a37cb7462fa59,PodSandboxId:588abc0823b3cd75d273369d0316e75af811abb3aaec4ce8552019f6b7127221,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724673070242654202,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-947cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f
434e8-b3e0-4667-921f-8620479bd95d,},Annotations:map[string]string{io.kubernetes.container.hash: 933617d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115cc037f0e7ccbf4d11644d2ca77b50a674b7853ef02738d12469014175fb7c,PodSandboxId:7e3d770e8bef85658539ac8b1c02892efb0fffb98aaff681fa86dc16c5ce7b9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724673065246352075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c8fcbdf99e8916b43595511ebbf180,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2aa0afe7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3ebedb1134a143b9011555b5b3e68d5684fe194a9d68fe249d043faea42d9,PodSandboxId:974934086a8edc016778d8f4d7d20ab49b6bfe56d549285fbb056cbad25a4bef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724673065231897642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e05c869d70e36368c8fdfb48
e6f33d77,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd691a8cf2b963d5ac3c91ff9bbf96f9d57cd4626ca652104cda638866a0bee,PodSandboxId:6949e1f4eb6bd815b1474a7a38007dfbf2ce3e92d9612cf14b24b98521790cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724673065193336386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 715ea14649e36e5d9210ae20c9394e64,},
Annotations:map[string]string{io.kubernetes.container.hash: 4bbe99ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14aba64e13773ff5c6807f0bf38574da49e16373d3476db44640ac38ed3b7724,PodSandboxId:c2da35f6e550973abebb0b15c5a4b2e07bb7be1567ff851165e086cfba00ab19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724673065229791429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-009774,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa571ac07fdf99ddb8d4f6c0234d44d6,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57b5976c-3cfe-44d6-93bb-a5164a32df39 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a027479019d31       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   bfcb312d3f51f       coredns-6d4b75cb6d-s7742
	4a8dd7b992d7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       2                   a205d06bd2fba       storage-provisioner
	7087742ad8fe0       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   588abc0823b3c       kube-proxy-947cx
	115cc037f0e7c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   7e3d770e8bef8       etcd-test-preload-009774
	89d3ebedb1134       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   974934086a8ed       kube-controller-manager-test-preload-009774
	14aba64e13773       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   c2da35f6e5509       kube-scheduler-test-preload-009774
	ddd691a8cf2b9       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   6949e1f4eb6bd       kube-apiserver-test-preload-009774
	
	
	==> coredns [a027479019d31039092bddd75a71aa9ccd35007c233fb0c280314b64c5da2ec7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:60283 - 1137 "HINFO IN 4159213815826737207.8300845022572266221. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015692038s
	
	
	==> describe nodes <==
	Name:               test-preload-009774
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-009774
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=test-preload-009774
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_49_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:49:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-009774
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 11:51:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 11:51:19 +0000   Mon, 26 Aug 2024 11:49:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 11:51:19 +0000   Mon, 26 Aug 2024 11:49:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 11:51:19 +0000   Mon, 26 Aug 2024 11:49:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 11:51:19 +0000   Mon, 26 Aug 2024 11:51:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    test-preload-009774
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ae450a8dce948b9b6db70ed8f316f9c
	  System UUID:                2ae450a8-dce9-48b9-b6db-70ed8f316f9c
	  Boot ID:                    10cdf695-9b77-4037-8b73-e06209cb381c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-s7742                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-test-preload-009774                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         108s
	  kube-system                 kube-apiserver-test-preload-009774             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-test-preload-009774    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-947cx                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-test-preload-009774             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x4 over 114s)  kubelet          Node test-preload-009774 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x4 over 114s)  kubelet          Node test-preload-009774 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 114s)  kubelet          Node test-preload-009774 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node test-preload-009774 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node test-preload-009774 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node test-preload-009774 status is now: NodeHasSufficientPID
	  Normal  NodeReady                96s                  kubelet          Node test-preload-009774 status is now: NodeReady
	  Normal  RegisteredNode           93s                  node-controller  Node test-preload-009774 event: Registered Node test-preload-009774 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-009774 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-009774 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-009774 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-009774 event: Registered Node test-preload-009774 in Controller
	
	
	==> dmesg <==
	[Aug26 11:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050937] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037924] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.778840] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.895427] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.557512] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.233188] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.060514] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065916] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.190864] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127006] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.285241] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Aug26 11:51] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.058305] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.478485] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +5.200312] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.890143] systemd-fstab-generator[1810]: Ignoring "noauto" option for root device
	[  +4.945942] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [115cc037f0e7ccbf4d11644d2ca77b50a674b7853ef02738d12469014175fb7c] <==
	{"level":"info","ts":"2024-08-26T11:51:05.658Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d7a5d3e20a6b0ba7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-26T11:51:05.658Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-26T11:51:05.659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 switched to configuration voters=(15539059057102621607)"}
	{"level":"info","ts":"2024-08-26T11:51:05.661Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f7d6b5428c0c9dc0","local-member-id":"d7a5d3e20a6b0ba7","added-peer-id":"d7a5d3e20a6b0ba7","added-peer-peer-urls":["https://192.168.39.142:2380"]}
	{"level":"info","ts":"2024-08-26T11:51:05.662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7d6b5428c0c9dc0","local-member-id":"d7a5d3e20a6b0ba7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:51:05.662Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T11:51:05.667Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d7a5d3e20a6b0ba7","initial-advertise-peer-urls":["https://192.168.39.142:2380"],"listen-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T11:51:05.667Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T11:51:05.662Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2024-08-26T11:51:05.667Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2024-08-26T11:51:05.667Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 2"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 3"}
	{"level":"info","ts":"2024-08-26T11:51:06.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2024-08-26T11:51:06.698Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:test-preload-009774 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T11:51:06.698Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:51:06.699Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T11:51:06.700Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T11:51:06.704Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.142:2379"}
	{"level":"info","ts":"2024-08-26T11:51:06.704Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T11:51:06.704Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:51:24 up 0 min,  0 users,  load average: 1.01, 0.26, 0.09
	Linux test-preload-009774 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ddd691a8cf2b963d5ac3c91ff9bbf96f9d57cd4626ca652104cda638866a0bee] <==
	I0826 11:51:09.185406       1 establishing_controller.go:76] Starting EstablishingController
	I0826 11:51:09.185441       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0826 11:51:09.185469       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0826 11:51:09.185490       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0826 11:51:09.190229       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0826 11:51:09.207285       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0826 11:51:09.284574       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 11:51:09.284923       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 11:51:09.300003       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0826 11:51:09.300375       1 cache.go:39] Caches are synced for autoregister controller
	I0826 11:51:09.300574       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0826 11:51:09.300611       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0826 11:51:09.300895       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0826 11:51:09.301350       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0826 11:51:09.305494       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0826 11:51:09.869704       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0826 11:51:10.184537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 11:51:10.797836       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0826 11:51:11.200063       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0826 11:51:11.212128       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0826 11:51:11.274340       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0826 11:51:11.294579       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 11:51:11.305228       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 11:51:21.785359       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0826 11:51:21.804252       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [89d3ebedb1134a143b9011555b5b3e68d5684fe194a9d68fe249d043faea42d9] <==
	I0826 11:51:21.661534       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0826 11:51:21.663828       1 shared_informer.go:262] Caches are synced for node
	I0826 11:51:21.663930       1 range_allocator.go:173] Starting range CIDR allocator
	I0826 11:51:21.663973       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0826 11:51:21.663986       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0826 11:51:21.665310       1 shared_informer.go:262] Caches are synced for deployment
	I0826 11:51:21.667709       1 shared_informer.go:262] Caches are synced for PV protection
	I0826 11:51:21.670201       1 shared_informer.go:262] Caches are synced for ephemeral
	I0826 11:51:21.672653       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0826 11:51:21.677435       1 shared_informer.go:262] Caches are synced for persistent volume
	I0826 11:51:21.681107       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0826 11:51:21.695307       1 shared_informer.go:262] Caches are synced for GC
	I0826 11:51:21.709863       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0826 11:51:21.743914       1 shared_informer.go:262] Caches are synced for daemon sets
	I0826 11:51:21.768437       1 shared_informer.go:262] Caches are synced for attach detach
	I0826 11:51:21.773865       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0826 11:51:21.792404       1 shared_informer.go:262] Caches are synced for endpoint
	I0826 11:51:21.810676       1 shared_informer.go:262] Caches are synced for resource quota
	I0826 11:51:21.839390       1 shared_informer.go:262] Caches are synced for resource quota
	I0826 11:51:21.854678       1 shared_informer.go:262] Caches are synced for disruption
	I0826 11:51:21.854720       1 disruption.go:371] Sending events to api server.
	I0826 11:51:21.884722       1 shared_informer.go:262] Caches are synced for stateful set
	I0826 11:51:22.304382       1 shared_informer.go:262] Caches are synced for garbage collector
	I0826 11:51:22.319292       1 shared_informer.go:262] Caches are synced for garbage collector
	I0826 11:51:22.319328       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [7087742ad8fe0b0b24ed6c0ca10f7b502374a99bf79afb40116a37cb7462fa59] <==
	I0826 11:51:10.725858       1 node.go:163] Successfully retrieved node IP: 192.168.39.142
	I0826 11:51:10.725969       1 server_others.go:138] "Detected node IP" address="192.168.39.142"
	I0826 11:51:10.726208       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0826 11:51:10.787429       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0826 11:51:10.787474       1 server_others.go:206] "Using iptables Proxier"
	I0826 11:51:10.788160       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0826 11:51:10.789256       1 server.go:661] "Version info" version="v1.24.4"
	I0826 11:51:10.789280       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:51:10.790582       1 config.go:317] "Starting service config controller"
	I0826 11:51:10.790827       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0826 11:51:10.790867       1 config.go:226] "Starting endpoint slice config controller"
	I0826 11:51:10.790873       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0826 11:51:10.792208       1 config.go:444] "Starting node config controller"
	I0826 11:51:10.792230       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0826 11:51:10.891640       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0826 11:51:10.891692       1 shared_informer.go:262] Caches are synced for service config
	I0826 11:51:10.892986       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [14aba64e13773ff5c6807f0bf38574da49e16373d3476db44640ac38ed3b7724] <==
	I0826 11:51:06.385387       1 serving.go:348] Generated self-signed cert in-memory
	W0826 11:51:09.246008       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 11:51:09.248011       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 11:51:09.248090       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 11:51:09.248100       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 11:51:09.304130       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0826 11:51:09.304371       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 11:51:09.309386       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0826 11:51:09.309478       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0826 11:51:09.309629       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 11:51:09.309738       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 11:51:09.410039       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563314    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68msg\" (UniqueName: \"kubernetes.io/projected/2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2-kube-api-access-68msg\") pod \"storage-provisioner\" (UID: \"2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2\") " pod="kube-system/storage-provisioner"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563538    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f434e8-b3e0-4667-921f-8620479bd95d-lib-modules\") pod \"kube-proxy-947cx\" (UID: \"a9f434e8-b3e0-4667-921f-8620479bd95d\") " pod="kube-system/kube-proxy-947cx"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563596    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkc8\" (UniqueName: \"kubernetes.io/projected/a9f434e8-b3e0-4667-921f-8620479bd95d-kube-api-access-kbkc8\") pod \"kube-proxy-947cx\" (UID: \"a9f434e8-b3e0-4667-921f-8620479bd95d\") " pod="kube-system/kube-proxy-947cx"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563621    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzvb\" (UniqueName: \"kubernetes.io/projected/30322cea-1fac-44a4-98c3-e2941cc4f826-kube-api-access-xqzvb\") pod \"coredns-6d4b75cb6d-s7742\" (UID: \"30322cea-1fac-44a4-98c3-e2941cc4f826\") " pod="kube-system/coredns-6d4b75cb6d-s7742"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563642    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume\") pod \"coredns-6d4b75cb6d-s7742\" (UID: \"30322cea-1fac-44a4-98c3-e2941cc4f826\") " pod="kube-system/coredns-6d4b75cb6d-s7742"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563661    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9f434e8-b3e0-4667-921f-8620479bd95d-kube-proxy\") pod \"kube-proxy-947cx\" (UID: \"a9f434e8-b3e0-4667-921f-8620479bd95d\") " pod="kube-system/kube-proxy-947cx"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563691    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f434e8-b3e0-4667-921f-8620479bd95d-xtables-lock\") pod \"kube-proxy-947cx\" (UID: \"a9f434e8-b3e0-4667-921f-8620479bd95d\") " pod="kube-system/kube-proxy-947cx"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563710    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2-tmp\") pod \"storage-provisioner\" (UID: \"2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2\") " pod="kube-system/storage-provisioner"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: I0826 11:51:09.563729    1136 reconciler.go:159] "Reconciler: start to sync state"
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: E0826 11:51:09.667395    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 26 11:51:09 test-preload-009774 kubelet[1136]: E0826 11:51:09.667535    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume podName:30322cea-1fac-44a4-98c3-e2941cc4f826 nodeName:}" failed. No retries permitted until 2024-08-26 11:51:10.167494273 +0000 UTC m=+5.808409825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume") pod "coredns-6d4b75cb6d-s7742" (UID: "30322cea-1fac-44a4-98c3-e2941cc4f826") : object "kube-system"/"coredns" not registered
	Aug 26 11:51:10 test-preload-009774 kubelet[1136]: E0826 11:51:10.170568    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 26 11:51:10 test-preload-009774 kubelet[1136]: E0826 11:51:10.170660    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume podName:30322cea-1fac-44a4-98c3-e2941cc4f826 nodeName:}" failed. No retries permitted until 2024-08-26 11:51:11.170643831 +0000 UTC m=+6.811559396 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume") pod "coredns-6d4b75cb6d-s7742" (UID: "30322cea-1fac-44a4-98c3-e2941cc4f826") : object "kube-system"/"coredns" not registered
	Aug 26 11:51:10 test-preload-009774 kubelet[1136]: E0826 11:51:10.592038    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-s7742" podUID=30322cea-1fac-44a4-98c3-e2941cc4f826
	Aug 26 11:51:10 test-preload-009774 kubelet[1136]: I0826 11:51:10.628807    1136 scope.go:110] "RemoveContainer" containerID="a9c6626a8edd16c812e256713d205bdbda9e335ac2ec60577401a73ce1662804"
	Aug 26 11:51:11 test-preload-009774 kubelet[1136]: E0826 11:51:11.178552    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 26 11:51:11 test-preload-009774 kubelet[1136]: E0826 11:51:11.178673    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume podName:30322cea-1fac-44a4-98c3-e2941cc4f826 nodeName:}" failed. No retries permitted until 2024-08-26 11:51:13.17865281 +0000 UTC m=+8.819568377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume") pod "coredns-6d4b75cb6d-s7742" (UID: "30322cea-1fac-44a4-98c3-e2941cc4f826") : object "kube-system"/"coredns" not registered
	Aug 26 11:51:11 test-preload-009774 kubelet[1136]: I0826 11:51:11.640810    1136 scope.go:110] "RemoveContainer" containerID="4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c"
	Aug 26 11:51:11 test-preload-009774 kubelet[1136]: E0826 11:51:11.640990    1136 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2)\"" pod="kube-system/storage-provisioner" podUID=2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2
	Aug 26 11:51:11 test-preload-009774 kubelet[1136]: I0826 11:51:11.641083    1136 scope.go:110] "RemoveContainer" containerID="a9c6626a8edd16c812e256713d205bdbda9e335ac2ec60577401a73ce1662804"
	Aug 26 11:51:12 test-preload-009774 kubelet[1136]: E0826 11:51:12.591770    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-s7742" podUID=30322cea-1fac-44a4-98c3-e2941cc4f826
	Aug 26 11:51:12 test-preload-009774 kubelet[1136]: I0826 11:51:12.646760    1136 scope.go:110] "RemoveContainer" containerID="4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c"
	Aug 26 11:51:12 test-preload-009774 kubelet[1136]: E0826 11:51:12.647372    1136 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2)\"" pod="kube-system/storage-provisioner" podUID=2848e7f0-c350-46ed-bd05-d1a2ca7fbaa2
	Aug 26 11:51:13 test-preload-009774 kubelet[1136]: E0826 11:51:13.192178    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 26 11:51:13 test-preload-009774 kubelet[1136]: E0826 11:51:13.192310    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume podName:30322cea-1fac-44a4-98c3-e2941cc4f826 nodeName:}" failed. No retries permitted until 2024-08-26 11:51:17.192272812 +0000 UTC m=+12.833188376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30322cea-1fac-44a4-98c3-e2941cc4f826-config-volume") pod "coredns-6d4b75cb6d-s7742" (UID: "30322cea-1fac-44a4-98c3-e2941cc4f826") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [4a8dd7b992d7a0e4a9fd7c387351a131126e9434d95c088768ceaf9c0a385b5c] <==
	I0826 11:51:10.751686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0826 11:51:10.754863       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-009774 -n test-preload-009774
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-009774 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-009774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-009774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-009774: (1.170232016s)
--- FAIL: TestPreload (178.99s)
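
To iterate on this failure outside the CI job, the test can be rerun in isolation from a minikube source checkout. The command below is a hedged sketch, not the harness invocation used by this report: it assumes the binaries under out/ (including out/minikube-linux-amd64) are already built, and that the integration suite still accepts the TEST_ARGS / -minikube-start-args conventions from minikube's contributor docs, which may differ between releases; only the standard Go -test.run selector is guaranteed.

  # Hedged sketch: rerun only TestPreload with the kvm2 driver used in this report.
  # Flag spelling follows minikube's contributor docs and may have changed;
  # --container-runtime=crio would be appended to -minikube-start-args the same way.
  env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestPreload" make integration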

                                                
                                    
x
+
TestKubernetesUpgrade (387.76s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0826 11:54:34.326939  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m25.492897803s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-117510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-117510" primary control-plane node in "kubernetes-upgrade-117510" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:54:07.157788  142196 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:54:07.158036  142196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:54:07.158044  142196 out.go:358] Setting ErrFile to fd 2...
	I0826 11:54:07.158048  142196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:54:07.158220  142196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:54:07.158796  142196 out.go:352] Setting JSON to false
	I0826 11:54:07.159721  142196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5788,"bootTime":1724667459,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:54:07.159786  142196 start.go:139] virtualization: kvm guest
	I0826 11:54:07.162119  142196 out.go:177] * [kubernetes-upgrade-117510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:54:07.163386  142196 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:54:07.163409  142196 notify.go:220] Checking for updates...
	I0826 11:54:07.166000  142196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:54:07.167381  142196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:54:07.168642  142196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:54:07.170025  142196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:54:07.171383  142196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:54:07.173077  142196 config.go:182] Loaded profile config "NoKubernetes-533322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:54:07.173195  142196 config.go:182] Loaded profile config "offline-crio-511327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:54:07.173267  142196 config.go:182] Loaded profile config "running-upgrade-669690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0826 11:54:07.173356  142196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:54:07.210519  142196 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 11:54:07.211779  142196 start.go:297] selected driver: kvm2
	I0826 11:54:07.211794  142196 start.go:901] validating driver "kvm2" against <nil>
	I0826 11:54:07.211810  142196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:54:07.212642  142196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:54:07.212756  142196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:54:07.229784  142196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:54:07.229840  142196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 11:54:07.230066  142196 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 11:54:07.230131  142196 cni.go:84] Creating CNI manager for ""
	I0826 11:54:07.230152  142196 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:54:07.230166  142196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 11:54:07.230232  142196 start.go:340] cluster config:
	{Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:54:07.230344  142196 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:54:07.232253  142196 out.go:177] * Starting "kubernetes-upgrade-117510" primary control-plane node in "kubernetes-upgrade-117510" cluster
	I0826 11:54:07.233672  142196 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 11:54:07.233711  142196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:54:07.233729  142196 cache.go:56] Caching tarball of preloaded images
	I0826 11:54:07.233807  142196 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:54:07.233817  142196 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0826 11:54:07.233912  142196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/config.json ...
	I0826 11:54:07.233933  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/config.json: {Name:mk2ca7f6697c9d744ff199df41717270f0f407e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:54:07.234105  142196 start.go:360] acquireMachinesLock for kubernetes-upgrade-117510: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:55:04.367605  142196 start.go:364] duration metric: took 57.133452366s to acquireMachinesLock for "kubernetes-upgrade-117510"
	I0826 11:55:04.367703  142196 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:55:04.367841  142196 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 11:55:04.369902  142196 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:55:04.370248  142196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:55:04.370301  142196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:55:04.391456  142196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
	I0826 11:55:04.392006  142196 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:55:04.392744  142196 main.go:141] libmachine: Using API Version  1
	I0826 11:55:04.392774  142196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:55:04.393162  142196 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:55:04.393380  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 11:55:04.393542  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:04.393760  142196 start.go:159] libmachine.API.Create for "kubernetes-upgrade-117510" (driver="kvm2")
	I0826 11:55:04.393792  142196 client.go:168] LocalClient.Create starting
	I0826 11:55:04.393832  142196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:55:04.393880  142196 main.go:141] libmachine: Decoding PEM data...
	I0826 11:55:04.393900  142196 main.go:141] libmachine: Parsing certificate...
	I0826 11:55:04.393974  142196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:55:04.394002  142196 main.go:141] libmachine: Decoding PEM data...
	I0826 11:55:04.394020  142196 main.go:141] libmachine: Parsing certificate...
	I0826 11:55:04.394045  142196 main.go:141] libmachine: Running pre-create checks...
	I0826 11:55:04.394059  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .PreCreateCheck
	I0826 11:55:04.394396  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetConfigRaw
	I0826 11:55:04.394922  142196 main.go:141] libmachine: Creating machine...
	I0826 11:55:04.394940  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .Create
	I0826 11:55:04.395103  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Creating KVM machine...
	I0826 11:55:04.396477  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found existing default KVM network
	I0826 11:55:04.397500  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.397343  143039 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:64:16} reservation:<nil>}
	I0826 11:55:04.398446  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.398298  143039 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010ff20}
	I0826 11:55:04.398474  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | created network xml: 
	I0826 11:55:04.398487  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | <network>
	I0826 11:55:04.398497  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   <name>mk-kubernetes-upgrade-117510</name>
	I0826 11:55:04.398507  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   <dns enable='no'/>
	I0826 11:55:04.398520  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   
	I0826 11:55:04.398530  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0826 11:55:04.398538  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |     <dhcp>
	I0826 11:55:04.398549  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0826 11:55:04.398569  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |     </dhcp>
	I0826 11:55:04.398603  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   </ip>
	I0826 11:55:04.398621  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG |   
	I0826 11:55:04.398632  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | </network>
	I0826 11:55:04.398648  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | 
	I0826 11:55:04.404411  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | trying to create private KVM network mk-kubernetes-upgrade-117510 192.168.50.0/24...
	I0826 11:55:04.483259  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | private KVM network mk-kubernetes-upgrade-117510 192.168.50.0/24 created
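The driver creates this network through libvirt's API; a purely illustrative manual equivalent, assuming the XML dumped above were saved to a hypothetical file mk-net.xml, would be:

    virsh net-define mk-net.xml                          # register the network from the XML
    virsh net-start mk-kubernetes-upgrade-117510         # bring the bridge up
    virsh net-dhcp-leases mk-kubernetes-upgrade-117510   # later: inspect leases handed out on it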
	I0826 11:55:04.483309  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510 ...
	I0826 11:55:04.483331  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.483249  143039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:55:04.483350  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:55:04.483378  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:55:04.745846  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.745711  143039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa...
	I0826 11:55:04.878277  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.878102  143039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/kubernetes-upgrade-117510.rawdisk...
	I0826 11:55:04.878315  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Writing magic tar header
	I0826 11:55:04.878330  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Writing SSH key tar header
	I0826 11:55:04.878339  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:04.878243  143039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510 ...
	I0826 11:55:04.878410  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510
	I0826 11:55:04.878437  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510 (perms=drwx------)
	I0826 11:55:04.878449  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:55:04.878468  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:55:04.878476  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:55:04.878484  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:55:04.878489  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:55:04.878501  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Checking permissions on dir: /home
	I0826 11:55:04.878513  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Skipping /home - not owner
	I0826 11:55:04.878524  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:55:04.878542  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:55:04.878556  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:55:04.878563  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:55:04.878583  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:55:04.878624  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Creating domain...
	I0826 11:55:04.879764  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) define libvirt domain using xml: 
	I0826 11:55:04.879791  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) <domain type='kvm'>
	I0826 11:55:04.879802  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <name>kubernetes-upgrade-117510</name>
	I0826 11:55:04.879811  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <memory unit='MiB'>2200</memory>
	I0826 11:55:04.879819  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <vcpu>2</vcpu>
	I0826 11:55:04.879826  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <features>
	I0826 11:55:04.879832  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <acpi/>
	I0826 11:55:04.879837  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <apic/>
	I0826 11:55:04.879845  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <pae/>
	I0826 11:55:04.879855  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     
	I0826 11:55:04.879865  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   </features>
	I0826 11:55:04.879876  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <cpu mode='host-passthrough'>
	I0826 11:55:04.879899  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   
	I0826 11:55:04.879918  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   </cpu>
	I0826 11:55:04.879924  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <os>
	I0826 11:55:04.879929  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <type>hvm</type>
	I0826 11:55:04.879935  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <boot dev='cdrom'/>
	I0826 11:55:04.879943  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <boot dev='hd'/>
	I0826 11:55:04.879950  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <bootmenu enable='no'/>
	I0826 11:55:04.879956  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   </os>
	I0826 11:55:04.879962  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   <devices>
	I0826 11:55:04.879970  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <disk type='file' device='cdrom'>
	I0826 11:55:04.879979  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/boot2docker.iso'/>
	I0826 11:55:04.879990  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <target dev='hdc' bus='scsi'/>
	I0826 11:55:04.879998  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <readonly/>
	I0826 11:55:04.880005  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </disk>
	I0826 11:55:04.880012  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <disk type='file' device='disk'>
	I0826 11:55:04.880020  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:55:04.880029  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/kubernetes-upgrade-117510.rawdisk'/>
	I0826 11:55:04.880037  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <target dev='hda' bus='virtio'/>
	I0826 11:55:04.880043  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </disk>
	I0826 11:55:04.880050  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <interface type='network'>
	I0826 11:55:04.880074  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <source network='mk-kubernetes-upgrade-117510'/>
	I0826 11:55:04.880094  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <model type='virtio'/>
	I0826 11:55:04.880102  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </interface>
	I0826 11:55:04.880110  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <interface type='network'>
	I0826 11:55:04.880116  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <source network='default'/>
	I0826 11:55:04.880122  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <model type='virtio'/>
	I0826 11:55:04.880130  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </interface>
	I0826 11:55:04.880135  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <serial type='pty'>
	I0826 11:55:04.880144  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <target port='0'/>
	I0826 11:55:04.880149  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </serial>
	I0826 11:55:04.880157  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <console type='pty'>
	I0826 11:55:04.880163  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <target type='serial' port='0'/>
	I0826 11:55:04.880170  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </console>
	I0826 11:55:04.880177  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     <rng model='virtio'>
	I0826 11:55:04.880188  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)       <backend model='random'>/dev/random</backend>
	I0826 11:55:04.880196  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     </rng>
	I0826 11:55:04.880203  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     
	I0826 11:55:04.880211  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)     
	I0826 11:55:04.880216  142196 main.go:141] libmachine: (kubernetes-upgrade-117510)   </devices>
	I0826 11:55:04.880222  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) </domain>
	I0826 11:55:04.880228  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) 
	I0826 11:55:04.885031  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:d5:49:16 in network default
	I0826 11:55:04.885638  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Ensuring networks are active...
	I0826 11:55:04.885660  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:04.886442  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Ensuring network default is active
	I0826 11:55:04.886842  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Ensuring network mk-kubernetes-upgrade-117510 is active
	I0826 11:55:04.887437  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Getting domain xml...
	I0826 11:55:04.888175  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Creating domain...
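Defining and booting the domain by hand (illustrative only; dom.xml is a hypothetical file holding the XML above, the driver itself goes through the libvirt API) would look roughly like:

    virsh define dom.xml                         # register the domain with libvirt
    virsh start kubernetes-upgrade-117510        # boot the VM
    virsh dominfo kubernetes-upgrade-117510      # confirm state: running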
	I0826 11:55:06.114093  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Waiting to get IP...
	I0826 11:55:06.114856  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.115343  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.115404  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:06.115351  143039 retry.go:31] will retry after 263.303252ms: waiting for machine to come up
	I0826 11:55:06.379876  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.380433  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.380470  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:06.380382  143039 retry.go:31] will retry after 318.030523ms: waiting for machine to come up
	I0826 11:55:06.699504  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.699985  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:06.700014  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:06.699929  143039 retry.go:31] will retry after 334.170816ms: waiting for machine to come up
	I0826 11:55:07.035442  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:07.035990  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:07.036012  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:07.035932  143039 retry.go:31] will retry after 494.904537ms: waiting for machine to come up
	I0826 11:55:07.532705  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:07.533190  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:07.533222  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:07.533120  143039 retry.go:31] will retry after 494.698258ms: waiting for machine to come up
	I0826 11:55:08.030151  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:08.030737  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:08.030765  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:08.030673  143039 retry.go:31] will retry after 657.237803ms: waiting for machine to come up
	I0826 11:55:08.689363  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:08.691482  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:08.691686  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:08.691622  143039 retry.go:31] will retry after 938.448034ms: waiting for machine to come up
	I0826 11:55:09.632180  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:09.632700  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:09.632734  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:09.632678  143039 retry.go:31] will retry after 1.223823216s: waiting for machine to come up
	I0826 11:55:10.858145  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:10.858634  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:10.858661  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:10.858576  143039 retry.go:31] will retry after 1.502411199s: waiting for machine to come up
	I0826 11:55:12.363426  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:12.363886  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:12.363913  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:12.363842  143039 retry.go:31] will retry after 1.512740918s: waiting for machine to come up
	I0826 11:55:13.877954  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:13.878450  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:13.878482  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:13.878382  143039 retry.go:31] will retry after 2.240722628s: waiting for machine to come up
	I0826 11:55:16.120601  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:16.121017  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:16.121047  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:16.120952  143039 retry.go:31] will retry after 2.552773874s: waiting for machine to come up
	I0826 11:55:18.675950  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:18.676490  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:18.676523  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:18.676411  143039 retry.go:31] will retry after 3.590561881s: waiting for machine to come up
	I0826 11:55:22.268117  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:22.268556  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find current IP address of domain kubernetes-upgrade-117510 in network mk-kubernetes-upgrade-117510
	I0826 11:55:22.268598  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | I0826 11:55:22.268527  143039 retry.go:31] will retry after 3.436901629s: waiting for machine to come up
	I0826 11:55:25.707920  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.708502  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Found IP for machine: 192.168.50.121
	I0826 11:55:25.708529  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Reserving static IP address...
	I0826 11:55:25.708543  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has current primary IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.709004  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-117510", mac: "52:54:00:4f:52:f8", ip: "192.168.50.121"} in network mk-kubernetes-upgrade-117510
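The wait above polls the network's DHCP leases with growing back-off until the VM's MAC address shows up. A rough hand-rolled equivalent (hypothetical script, not the driver's actual retry logic):

    mac=52:54:00:4f:52:f8
    delay=1
    until virsh net-dhcp-leases mk-kubernetes-upgrade-117510 | grep -qi "$mac"; do
        sleep "$delay"
        delay=$((delay * 2))    # crude back-off; the logged waits above grow in a similar fashion
    done
    virsh net-dhcp-leases mk-kubernetes-upgrade-117510 | grep -i "$mac"   # prints the leased IP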
	I0826 11:55:25.794272  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Getting to WaitForSSH function...
	I0826 11:55:25.794308  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Reserved static IP address: 192.168.50.121
	I0826 11:55:25.794323  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Waiting for SSH to be available...
	I0826 11:55:25.797406  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.797847  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:25.797879  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.798007  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Using SSH client type: external
	I0826 11:55:25.798031  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa (-rw-------)
	I0826 11:55:25.798065  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:55:25.798080  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | About to run SSH command:
	I0826 11:55:25.798095  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | exit 0
	I0826 11:55:25.927181  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | SSH cmd err, output: <nil>: 
	I0826 11:55:25.927445  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) KVM machine creation complete!
	I0826 11:55:25.927847  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetConfigRaw
	I0826 11:55:25.928433  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:25.928674  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:25.928845  142196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:55:25.928860  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetState
	I0826 11:55:25.930383  142196 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:55:25.930398  142196 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:55:25.930404  142196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:55:25.930410  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:25.933022  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.933398  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:25.933430  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:25.933574  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:25.933778  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:25.934013  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:25.934175  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:25.934343  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:25.934595  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:25.934611  142196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:55:26.042252  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:55:26.042291  142196 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:55:26.042302  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.045090  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.045432  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.045461  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.045741  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.045987  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.046164  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.046321  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.046520  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:26.046733  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:26.046746  142196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:55:26.155439  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:55:26.155555  142196 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:55:26.155571  142196 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:55:26.155590  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 11:55:26.155938  142196 buildroot.go:166] provisioning hostname "kubernetes-upgrade-117510"
	I0826 11:55:26.155969  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 11:55:26.156223  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.159056  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.159446  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.159473  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.159627  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.159864  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.160006  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.160177  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.160337  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:26.160561  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:26.160581  142196 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-117510 && echo "kubernetes-upgrade-117510" | sudo tee /etc/hostname
	I0826 11:55:26.281020  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-117510
	
	I0826 11:55:26.281053  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.283983  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.284292  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.284325  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.284508  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.284779  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.284969  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.285106  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.285309  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:26.285645  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:26.285669  142196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-117510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-117510/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-117510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:55:26.400601  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:55:26.400650  142196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:55:26.400678  142196 buildroot.go:174] setting up certificates
	I0826 11:55:26.400696  142196 provision.go:84] configureAuth start
	I0826 11:55:26.400715  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 11:55:26.401073  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 11:55:26.403848  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.404193  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.404230  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.404368  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.406886  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.407198  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.407233  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.407363  142196 provision.go:143] copyHostCerts
	I0826 11:55:26.407431  142196 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:55:26.407450  142196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:55:26.407516  142196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:55:26.407639  142196 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:55:26.407648  142196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:55:26.407670  142196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:55:26.407733  142196 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:55:26.407741  142196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:55:26.407758  142196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:55:26.407807  142196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-117510 san=[127.0.0.1 192.168.50.121 kubernetes-upgrade-117510 localhost minikube]
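The server certificate above is generated in Go; an openssl sketch covering the same CA, organisation and SANs (output file names are placeholders) would be:

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.kubernetes-upgrade-117510"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.121,DNS:kubernetes-upgrade-117510,DNS:localhost,DNS:minikube')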
	I0826 11:55:26.500306  142196 provision.go:177] copyRemoteCerts
	I0826 11:55:26.500374  142196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:55:26.500400  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.503397  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.503720  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.503754  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.503953  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.504130  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.504314  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.504511  142196 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 11:55:26.589962  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:55:26.615158  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0826 11:55:26.639773  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 11:55:26.663674  142196 provision.go:87] duration metric: took 262.958424ms to configureAuth
	I0826 11:55:26.663709  142196 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:55:26.663898  142196 config.go:182] Loaded profile config "kubernetes-upgrade-117510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 11:55:26.663978  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.666949  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.667395  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.667426  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.667643  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.667878  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.668065  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.668211  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.668380  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:26.668618  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:26.668636  142196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:55:26.933951  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:55:26.933980  142196 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:55:26.933988  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetURL
	I0826 11:55:26.935415  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | Using libvirt version 6000000
	I0826 11:55:26.937776  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.938185  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.938209  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.938396  142196 main.go:141] libmachine: Docker is up and running!
	I0826 11:55:26.938412  142196 main.go:141] libmachine: Reticulating splines...
	I0826 11:55:26.938424  142196 client.go:171] duration metric: took 22.544620006s to LocalClient.Create
	I0826 11:55:26.938452  142196 start.go:167] duration metric: took 22.544695517s to libmachine.API.Create "kubernetes-upgrade-117510"
	I0826 11:55:26.938465  142196 start.go:293] postStartSetup for "kubernetes-upgrade-117510" (driver="kvm2")
	I0826 11:55:26.938479  142196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:55:26.938502  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:26.938772  142196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:55:26.938796  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:26.941173  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.941500  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:26.941542  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:26.941674  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:26.941863  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:26.942035  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:26.942158  142196 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 11:55:27.024971  142196 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:55:27.028995  142196 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:55:27.029029  142196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:55:27.029103  142196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:55:27.029184  142196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:55:27.029276  142196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:55:27.039050  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:55:27.063118  142196 start.go:296] duration metric: took 124.635316ms for postStartSetup
	I0826 11:55:27.063183  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetConfigRaw
	I0826 11:55:27.063892  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 11:55:27.066338  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.066643  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:27.066672  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.066923  142196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/config.json ...
	I0826 11:55:27.067172  142196 start.go:128] duration metric: took 22.699316734s to createHost
	I0826 11:55:27.067204  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:27.069844  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.070240  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:27.070267  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.070415  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:27.070621  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:27.070814  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:27.070977  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:27.071152  142196 main.go:141] libmachine: Using SSH client type: native
	I0826 11:55:27.071372  142196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 11:55:27.071389  142196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:55:27.179423  142196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724673327.140678804
	
	I0826 11:55:27.179449  142196 fix.go:216] guest clock: 1724673327.140678804
	I0826 11:55:27.179459  142196 fix.go:229] Guest: 2024-08-26 11:55:27.140678804 +0000 UTC Remote: 2024-08-26 11:55:27.067190225 +0000 UTC m=+79.946554630 (delta=73.488579ms)
	I0826 11:55:27.179487  142196 fix.go:200] guest clock delta is within tolerance: 73.488579ms
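The clock comparison above reads date +%s.%N inside the guest and diffs it against the host's wall clock; a rough manual check (key path shortened, and since the two reads are not simultaneous the delta is only approximate):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -o StrictHostKeyChecking=no -i <path-to>/id_rsa docker@192.168.50.121 'date +%s.%N')
    echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc -l) s"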
	I0826 11:55:27.179494  142196 start.go:83] releasing machines lock for "kubernetes-upgrade-117510", held for 22.81183059s
	I0826 11:55:27.179524  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:27.179849  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 11:55:27.182852  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.183258  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:27.183292  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.183439  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:27.184082  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:27.184297  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 11:55:27.184383  142196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:55:27.184441  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:27.184679  142196 ssh_runner.go:195] Run: cat /version.json
	I0826 11:55:27.184704  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 11:55:27.187483  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.187626  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.187841  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:27.187891  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.187931  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:27.187952  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:27.188109  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:27.188218  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 11:55:27.188303  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:27.188423  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 11:55:27.188497  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:27.188577  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 11:55:27.188656  142196 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 11:55:27.188762  142196 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 11:55:27.268467  142196 ssh_runner.go:195] Run: systemctl --version
	I0826 11:55:27.316005  142196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:55:27.485034  142196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:55:27.491724  142196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:55:27.491794  142196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:55:27.508191  142196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
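The find/mv at 11:55:27.491794 parks any bridge or podman CNI configs under a ".mk_disabled" suffix so they cannot conflict with the CNI that minikube configures later. A minimal check inside the guest (a sketch; the exact file list varies per base image) would be:

    # List the CNI config directory; per the log line above, the podman bridge
    # config should now carry the .mk_disabled suffix.
    ls /etc/cni/net.d/
    # 87-podman-bridge.conflist.mk_disabled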
	I0826 11:55:27.508217  142196 start.go:495] detecting cgroup driver to use...
	I0826 11:55:27.508276  142196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:55:27.525118  142196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:55:27.539667  142196 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:55:27.539738  142196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:55:27.555648  142196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:55:27.569900  142196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:55:27.711592  142196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:55:27.850663  142196 docker.go:233] disabling docker service ...
	I0826 11:55:27.850727  142196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:55:27.865264  142196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:55:27.878210  142196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:55:28.006913  142196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:55:28.118706  142196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:55:28.133061  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:55:28.154360  142196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 11:55:28.154434  142196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:55:28.166771  142196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:55:28.166869  142196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:55:28.179019  142196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:55:28.190870  142196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
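The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the pause:3.2 image, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. A quick way to confirm the resulting drop-in on the guest (a sketch; expected values are taken from the sed commands above) would be:

    # Show the three keys the sed commands above set in the cri-o drop-in.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"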
	I0826 11:55:28.201542  142196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:55:28.212277  142196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:55:28.221234  142196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:55:28.221293  142196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:55:28.233517  142196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
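The sysctl probe at 11:55:28.212277 failed only because br_netfilter was not loaded yet; minikube then loads the module and enables IPv4 forwarding. A hedged manual verification on a comparable guest would be:

    # Confirm the bridge-netfilter module is present and the sysctls are in place
    # after the modprobe and the echo above (both values are expected to read 1).
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward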
	I0826 11:55:28.245272  142196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:55:28.369249  142196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:55:28.519688  142196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:55:28.519763  142196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:55:28.524533  142196 start.go:563] Will wait 60s for crictl version
	I0826 11:55:28.524622  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:28.528828  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:55:28.571295  142196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:55:28.571395  142196 ssh_runner.go:195] Run: crio --version
	I0826 11:55:28.600266  142196 ssh_runner.go:195] Run: crio --version
	I0826 11:55:28.634917  142196 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 11:55:28.636577  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 11:55:28.639773  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:28.640271  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:55:18 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 11:55:28.640295  142196 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 11:55:28.640599  142196 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 11:55:28.644827  142196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:55:28.657988  142196 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:55:28.658165  142196 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 11:55:28.658232  142196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:55:28.697118  142196 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 11:55:28.697206  142196 ssh_runner.go:195] Run: which lz4
	I0826 11:55:28.701273  142196 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 11:55:28.705656  142196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 11:55:28.705688  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 11:55:30.300926  142196 crio.go:462] duration metric: took 1.599712115s to copy over tarball
	I0826 11:55:30.301012  142196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 11:55:33.144186  142196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.84313257s)
	I0826 11:55:33.144227  142196 crio.go:469] duration metric: took 2.84326705s to extract the tarball
	I0826 11:55:33.144239  142196 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 11:55:33.188600  142196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:55:33.236054  142196 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 11:55:33.236086  142196 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 11:55:33.236202  142196 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:55:33.236227  142196 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.236261  142196 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.236305  142196 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 11:55:33.236319  142196 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.236392  142196 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.236502  142196 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.236518  142196 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.237730  142196 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.237741  142196 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.237731  142196 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.237772  142196 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.237742  142196 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:55:33.237787  142196 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.237798  142196 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.237916  142196 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 11:55:33.483823  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 11:55:33.507679  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.515678  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.529017  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.530980  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.539181  142196 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 11:55:33.539235  142196 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 11:55:33.539295  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.547793  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.558532  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.644103  142196 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 11:55:33.644165  142196 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.644223  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.677760  142196 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 11:55:33.677806  142196 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.677860  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.692122  142196 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 11:55:33.692173  142196 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.692228  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.707433  142196 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 11:55:33.707571  142196 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.707476  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:55:33.707643  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.709796  142196 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 11:55:33.709846  142196 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.709893  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.709916  142196 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 11:55:33.709942  142196 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.709977  142196 ssh_runner.go:195] Run: which crictl
	I0826 11:55:33.710021  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.710046  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.710170  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.722953  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.819944  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.820014  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:33.820044  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.819948  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:55:33.820002  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:33.831605  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:33.855055  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:55:33.988940  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:33.988963  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:33.989001  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:55:34.008681  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:55:34.008734  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:55:34.008800  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:55:34.008875  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:55:34.107414  142196 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:55:34.147782  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:55:34.147873  142196 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:55:34.147875  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 11:55:34.170825  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 11:55:34.170913  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 11:55:34.170992  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 11:55:34.171064  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 11:55:34.322418  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 11:55:34.322489  142196 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 11:55:34.322559  142196 cache_images.go:92] duration metric: took 1.08642911s to LoadCachedImages
	W0826 11:55:34.322636  142196 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
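LoadCachedImages fails here because none of the per-image archives exist under the host-side cache (the stat in the warning above). To see what that cache actually holds on the Jenkins host (a sketch, using the path from the log):

    # List per-image archives in the minikube image cache; an empty or missing
    # directory matches the "no such file or directory" error above.
    ls -l /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null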
	I0826 11:55:34.322654  142196 kubeadm.go:934] updating node { 192.168.50.121 8443 v1.20.0 crio true true} ...
	I0826 11:55:34.322787  142196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-117510 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
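The kubelet unit drop-in rendered above is copied to the guest a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes). To inspect what systemd will actually execute (a sketch):

    # Show the kubelet unit plus all drop-ins as systemd sees them.
    systemctl cat kubelet --no-pager
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf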
	I0826 11:55:34.322881  142196 ssh_runner.go:195] Run: crio config
	I0826 11:55:34.368796  142196 cni.go:84] Creating CNI manager for ""
	I0826 11:55:34.368822  142196 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:55:34.368840  142196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:55:34.368870  142196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.121 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-117510 NodeName:kubernetes-upgrade-117510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 11:55:34.369049  142196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-117510"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 11:55:34.369132  142196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 11:55:34.379984  142196 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:55:34.380057  142196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 11:55:34.390087  142196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0826 11:55:34.408793  142196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:55:34.427510  142196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
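The generated kubeadm config shown earlier is written here as /var/tmp/minikube/kubeadm.yaml.new (2126 bytes) and later copied to /var/tmp/minikube/kubeadm.yaml. One way to validate such a config without touching node state (a sketch, assuming the bundled kubeadm binary path from the log) is a dry run:

    # Validate the rendered config; --dry-run prints what kubeadm would do without applying it.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run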
	I0826 11:55:34.446280  142196 ssh_runner.go:195] Run: grep 192.168.50.121	control-plane.minikube.internal$ /etc/hosts
	I0826 11:55:34.450301  142196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:55:34.462623  142196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:55:34.584026  142196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:55:34.607554  142196 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510 for IP: 192.168.50.121
	I0826 11:55:34.607580  142196 certs.go:194] generating shared ca certs ...
	I0826 11:55:34.607604  142196 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:34.607781  142196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:55:34.607840  142196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:55:34.607866  142196 certs.go:256] generating profile certs ...
	I0826 11:55:34.607950  142196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.key
	I0826 11:55:34.607969  142196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.crt with IP's: []
	I0826 11:55:34.893779  142196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.crt ...
	I0826 11:55:34.893817  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.crt: {Name:mk4f9ef66b9485e1315822c942a00d53f10d7b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:34.893999  142196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.key ...
	I0826 11:55:34.894015  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.key: {Name:mk557ca65f7850af7ec7230d7da39a8fce848827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:34.894097  142196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key.794b295d
	I0826 11:55:34.894114  142196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt.794b295d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.121]
	I0826 11:55:35.090477  142196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt.794b295d ...
	I0826 11:55:35.090509  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt.794b295d: {Name:mk58c6f2de2bf83a200a5711e29a7d7d7997c4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:35.128459  142196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key.794b295d ...
	I0826 11:55:35.128500  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key.794b295d: {Name:mk573a44cd96b7501f418e0a36aeb8f3ffe8b445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:35.129099  142196 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt.794b295d -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt
	I0826 11:55:35.129208  142196 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key.794b295d -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key
	I0826 11:55:35.129280  142196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key
	I0826 11:55:35.129305  142196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.crt with IP's: []
	I0826 11:55:35.370941  142196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.crt ...
	I0826 11:55:35.370976  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.crt: {Name:mk25d02f5ca80a54fdbab096d655b592f25db04e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:35.371238  142196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key ...
	I0826 11:55:35.371266  142196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key: {Name:mkc6477acfbe78b17210a08dc7bf68f1a2c19bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:55:35.371498  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:55:35.371557  142196 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:55:35.371573  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:55:35.371602  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:55:35.371632  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:55:35.371665  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:55:35.371725  142196 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:55:35.372504  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:55:35.399384  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:55:35.423881  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:55:35.450678  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:55:35.475688  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0826 11:55:35.508762  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 11:55:35.563105  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:55:35.599510  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:55:35.631952  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:55:35.662717  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:55:35.690550  142196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:55:35.718035  142196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:55:35.741025  142196 ssh_runner.go:195] Run: openssl version
	I0826 11:55:35.750323  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:55:35.767265  142196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:55:35.772751  142196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:55:35.772827  142196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:55:35.780112  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:55:35.794799  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:55:35.808088  142196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:55:35.813339  142196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:55:35.813427  142196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:55:35.820077  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:55:35.833701  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:55:35.854278  142196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:55:35.860491  142196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:55:35.860599  142196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:55:35.868989  142196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
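The openssl x509 -hash calls above compute the subject hash that names each /etc/ssl/certs symlink (51391683.0, 3ec20f2e.0, b5213941.0). Reproducing the last mapping by hand (a sketch, paths taken from the log):

    # The link name is the certificate's subject hash plus ".0".
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0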
	I0826 11:55:35.885014  142196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:55:35.890960  142196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:55:35.891032  142196 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:55:35.891134  142196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:55:35.891202  142196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:55:35.934456  142196 cri.go:89] found id: ""
	I0826 11:55:35.934635  142196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 11:55:35.944999  142196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 11:55:35.955324  142196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:55:35.966354  142196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:55:35.966380  142196 kubeadm.go:157] found existing configuration files:
	
	I0826 11:55:35.966440  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:55:35.981081  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:55:35.981177  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:55:35.995962  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:55:36.006814  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:55:36.006931  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:55:36.022146  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:55:36.033380  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:55:36.033462  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:55:36.044028  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:55:36.055093  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:55:36.055174  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:55:36.068552  142196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 11:55:36.227730  142196 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 11:55:36.228342  142196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 11:55:36.407294  142196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 11:55:36.407416  142196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 11:55:36.407535  142196 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 11:55:36.609676  142196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 11:55:36.612076  142196 out.go:235]   - Generating certificates and keys ...
	I0826 11:55:36.612211  142196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 11:55:36.612368  142196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 11:55:36.734682  142196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 11:55:36.907060  142196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 11:55:37.138381  142196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 11:55:37.437920  142196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 11:55:37.731177  142196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 11:55:37.731414  142196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	I0826 11:55:38.105507  142196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 11:55:38.105712  142196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	I0826 11:55:38.243615  142196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 11:55:38.449634  142196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 11:55:38.591287  142196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 11:55:38.591427  142196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 11:55:38.665960  142196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 11:55:39.012252  142196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 11:55:39.195997  142196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 11:55:39.318372  142196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 11:55:39.343767  142196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 11:55:39.344389  142196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 11:55:39.344444  142196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 11:55:39.486313  142196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 11:55:39.488068  142196 out.go:235]   - Booting up control plane ...
	I0826 11:55:39.488222  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 11:55:39.493088  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 11:55:39.496604  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 11:55:39.497722  142196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 11:55:39.503113  142196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 11:56:19.473419  142196 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 11:56:19.473998  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:56:19.474268  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:56:24.473062  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:56:24.473350  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:56:34.472146  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:56:34.472357  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:56:54.472508  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:56:54.472760  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:57:34.471041  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:57:34.471291  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:57:34.471311  142196 kubeadm.go:310] 
	I0826 11:57:34.471361  142196 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 11:57:34.471397  142196 kubeadm.go:310] 		timed out waiting for the condition
	I0826 11:57:34.471407  142196 kubeadm.go:310] 
	I0826 11:57:34.471476  142196 kubeadm.go:310] 	This error is likely caused by:
	I0826 11:57:34.471542  142196 kubeadm.go:310] 		- The kubelet is not running
	I0826 11:57:34.471701  142196 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 11:57:34.471719  142196 kubeadm.go:310] 
	I0826 11:57:34.471858  142196 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 11:57:34.471913  142196 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 11:57:34.471945  142196 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 11:57:34.471956  142196 kubeadm.go:310] 
	I0826 11:57:34.472075  142196 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 11:57:34.472196  142196 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 11:57:34.472222  142196 kubeadm.go:310] 
	I0826 11:57:34.472365  142196 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 11:57:34.472476  142196 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 11:57:34.472543  142196 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 11:57:34.472603  142196 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 11:57:34.472611  142196 kubeadm.go:310] 
	I0826 11:57:34.473103  142196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 11:57:34.473201  142196 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 11:57:34.473304  142196 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
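When kubeadm times out in wait-control-plane like this, the kubelet on the node is the first thing to check, exactly as the messages above suggest. A minimal triage sketch from inside the kubernetes-upgrade-117510 guest (assuming SSH access to the VM):

    # Mirror the commands kubeadm recommends above: kubelet health, recent kubelet logs,
    # and any control-plane containers cri-o managed to start.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause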
	W0826 11:57:34.473503  142196 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-117510 localhost] and IPs [192.168.50.121 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 11:57:34.473553  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 11:57:35.608228  142196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.13464168s)
	I0826 11:57:35.608312  142196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:57:35.622399  142196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:57:35.632692  142196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:57:35.632730  142196 kubeadm.go:157] found existing configuration files:
	
	I0826 11:57:35.632786  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:57:35.642534  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:57:35.642620  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:57:35.652448  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:57:35.661883  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:57:35.661956  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:57:35.671884  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:57:35.680978  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:57:35.681050  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:57:35.690509  142196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:57:35.699808  142196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:57:35.699877  142196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:57:35.709637  142196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 11:57:35.777599  142196 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 11:57:35.777713  142196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 11:57:35.922387  142196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 11:57:35.922563  142196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 11:57:35.922711  142196 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 11:57:36.102925  142196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 11:57:36.105097  142196 out.go:235]   - Generating certificates and keys ...
	I0826 11:57:36.105209  142196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 11:57:36.105284  142196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 11:57:36.105387  142196 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 11:57:36.105484  142196 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 11:57:36.105595  142196 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 11:57:36.105707  142196 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 11:57:36.105790  142196 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 11:57:36.105884  142196 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 11:57:36.105992  142196 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 11:57:36.106092  142196 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 11:57:36.106163  142196 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 11:57:36.106243  142196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 11:57:36.408841  142196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 11:57:36.621145  142196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 11:57:36.726184  142196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 11:57:36.802353  142196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 11:57:36.816821  142196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 11:57:36.817285  142196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 11:57:36.817376  142196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 11:57:36.958909  142196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 11:57:36.961269  142196 out.go:235]   - Booting up control plane ...
	I0826 11:57:36.961378  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 11:57:36.964800  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 11:57:36.974076  142196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 11:57:36.975004  142196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 11:57:36.978299  142196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 11:58:16.981020  142196 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 11:58:16.981166  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:58:16.981436  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:58:21.981464  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:58:21.981806  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:58:31.982225  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:58:31.982525  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:58:51.983834  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:58:51.984087  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:59:31.984431  142196 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 11:59:31.984660  142196 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 11:59:31.984672  142196 kubeadm.go:310] 
	I0826 11:59:31.984749  142196 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 11:59:31.984822  142196 kubeadm.go:310] 		timed out waiting for the condition
	I0826 11:59:31.984833  142196 kubeadm.go:310] 
	I0826 11:59:31.984878  142196 kubeadm.go:310] 	This error is likely caused by:
	I0826 11:59:31.984924  142196 kubeadm.go:310] 		- The kubelet is not running
	I0826 11:59:31.985094  142196 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 11:59:31.985143  142196 kubeadm.go:310] 
	I0826 11:59:31.985336  142196 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 11:59:31.985398  142196 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 11:59:31.985459  142196 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 11:59:31.985466  142196 kubeadm.go:310] 
	I0826 11:59:31.985597  142196 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 11:59:31.985715  142196 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 11:59:31.985729  142196 kubeadm.go:310] 
	I0826 11:59:31.985891  142196 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 11:59:31.986008  142196 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 11:59:31.986105  142196 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 11:59:31.986225  142196 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 11:59:31.986259  142196 kubeadm.go:310] 
	I0826 11:59:31.986408  142196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 11:59:31.986538  142196 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 11:59:31.986660  142196 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 11:59:31.986727  142196 kubeadm.go:394] duration metric: took 3m56.095702329s to StartCluster
	I0826 11:59:31.986785  142196 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 11:59:31.986866  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 11:59:32.037822  142196 cri.go:89] found id: ""
	I0826 11:59:32.037868  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.037881  142196 logs.go:278] No container was found matching "kube-apiserver"
	I0826 11:59:32.037890  142196 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 11:59:32.037955  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 11:59:32.079996  142196 cri.go:89] found id: ""
	I0826 11:59:32.080028  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.080038  142196 logs.go:278] No container was found matching "etcd"
	I0826 11:59:32.080047  142196 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 11:59:32.080115  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 11:59:32.116418  142196 cri.go:89] found id: ""
	I0826 11:59:32.116452  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.116464  142196 logs.go:278] No container was found matching "coredns"
	I0826 11:59:32.116471  142196 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 11:59:32.116539  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 11:59:32.152100  142196 cri.go:89] found id: ""
	I0826 11:59:32.152138  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.152151  142196 logs.go:278] No container was found matching "kube-scheduler"
	I0826 11:59:32.152162  142196 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 11:59:32.152232  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 11:59:32.186573  142196 cri.go:89] found id: ""
	I0826 11:59:32.186601  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.186609  142196 logs.go:278] No container was found matching "kube-proxy"
	I0826 11:59:32.186616  142196 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 11:59:32.186693  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 11:59:32.223440  142196 cri.go:89] found id: ""
	I0826 11:59:32.223477  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.223485  142196 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 11:59:32.223492  142196 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 11:59:32.223548  142196 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 11:59:32.259772  142196 cri.go:89] found id: ""
	I0826 11:59:32.259808  142196 logs.go:276] 0 containers: []
	W0826 11:59:32.259821  142196 logs.go:278] No container was found matching "kindnet"
	I0826 11:59:32.259834  142196 logs.go:123] Gathering logs for kubelet ...
	I0826 11:59:32.259852  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 11:59:32.313674  142196 logs.go:123] Gathering logs for dmesg ...
	I0826 11:59:32.313725  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 11:59:32.328002  142196 logs.go:123] Gathering logs for describe nodes ...
	I0826 11:59:32.328033  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 11:59:32.447382  142196 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 11:59:32.447411  142196 logs.go:123] Gathering logs for CRI-O ...
	I0826 11:59:32.447430  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 11:59:32.557677  142196 logs.go:123] Gathering logs for container status ...
	I0826 11:59:32.557724  142196 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 11:59:32.596732  142196 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 11:59:32.596794  142196 out.go:270] * 
	* 
	W0826 11:59:32.596868  142196 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 11:59:32.596887  142196 out.go:270] * 
	* 
	W0826 11:59:32.597842  142196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 11:59:32.600864  142196 out.go:201] 
	W0826 11:59:32.602048  142196 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 11:59:32.602105  142196 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 11:59:32.602128  142196 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 11:59:32.603627  142196 out.go:201] 

                                                
                                                
** /stderr **
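The run above fails in kubeadm's wait-control-plane phase: the kubelet never answers on http://localhost:10248/healthz, so minikube exits with K8S_KUBELET_NOT_RUNNING and points at the kubelet journal and the cgroup-driver setting. A minimal triage sketch assembled only from the commands the log itself suggests (profile name and flags copied from this run; not a verified fix for this image):

	# inspect the kubelet on the node behind this profile
	minikube -p kubernetes-upgrade-117510 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-117510 ssh -- sudo journalctl -xeu kubelet
	# look for crashed control-plane containers via the CRI-O socket
	minikube -p kubernetes-upgrade-117510 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the start with the cgroup driver the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
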
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-117510
E0826 11:59:34.326742  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-117510: (2.298977262s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-117510 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-117510 status --format={{.Host}}: exit status 7 (75.370134ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.509681054s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-117510 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.778071ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-117510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-117510
	    minikube start -p kubernetes-upgrade-117510 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1175102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-117510 --kubernetes-version=v1.31.0
	    

** /stderr **
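Note: exit status 106 with reason K8S_DOWNGRADE_UNSUPPORTED is the outcome this step is designed to provoke: minikube refuses to downgrade an existing v1.31.0 cluster in place. If a v1.20.0 cluster were actually wanted, the path minikube itself suggests above would look roughly like this (a sketch mirroring the suggestion text and the flags used elsewhere in this run, not something the test executes):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-117510
    out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio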
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-117510 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (14.417321294s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-26 12:00:31.147829239 +0000 UTC m=+4436.473494957
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-117510 -n kubernetes-upgrade-117510
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-117510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-117510 logs -n 25: (1.632162694s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo find                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo crio                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-814705                                     | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p pause-585941 --memory=2048                        | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:59 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-373568 ssh                              | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-373568 -- sudo                       | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-373568                               | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p old-k8s-version-839656                            | old-k8s-version-839656    | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 11:59 UTC |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-585941                                      | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:00:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:00:16.782884  149560 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:00:16.783175  149560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:16.783186  149560 out.go:358] Setting ErrFile to fd 2...
	I0826 12:00:16.783193  149560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:16.783496  149560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:00:16.784203  149560 out.go:352] Setting JSON to false
	I0826 12:00:16.785299  149560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6158,"bootTime":1724667459,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:00:16.785371  149560 start.go:139] virtualization: kvm guest
	I0826 12:00:16.787203  149560 out.go:177] * [kubernetes-upgrade-117510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:00:16.789104  149560 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:00:16.789108  149560 notify.go:220] Checking for updates...
	I0826 12:00:16.790640  149560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:00:16.792366  149560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:00:16.793883  149560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:00:16.795344  149560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:00:16.796734  149560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:00:16.798390  149560 config.go:182] Loaded profile config "kubernetes-upgrade-117510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:16.798931  149560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:00:16.799003  149560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:00:16.816087  149560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0826 12:00:16.816714  149560 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:00:16.817436  149560 main.go:141] libmachine: Using API Version  1
	I0826 12:00:16.817496  149560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:00:16.817946  149560 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:00:16.818177  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:16.818523  149560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:00:16.819013  149560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:00:16.819064  149560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:00:16.837105  149560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I0826 12:00:16.837558  149560 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:00:16.838083  149560 main.go:141] libmachine: Using API Version  1
	I0826 12:00:16.838111  149560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:00:16.838472  149560 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:00:16.838733  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:16.881948  149560 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:00:16.883282  149560 start.go:297] selected driver: kvm2
	I0826 12:00:16.883316  149560 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:16.883451  149560 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:00:16.884398  149560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:16.884525  149560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:00:16.907684  149560 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:00:16.908271  149560 cni.go:84] Creating CNI manager for ""
	I0826 12:00:16.908292  149560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:16.908366  149560 start.go:340] cluster config:
	{Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-117510 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:16.908516  149560 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:16.910979  149560 out.go:177] * Starting "kubernetes-upgrade-117510" primary control-plane node in "kubernetes-upgrade-117510" cluster
	I0826 12:00:16.912557  149560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:00:16.912610  149560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:00:16.912621  149560 cache.go:56] Caching tarball of preloaded images
	I0826 12:00:16.912737  149560 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:00:16.912753  149560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:00:16.912886  149560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/config.json ...
	I0826 12:00:16.913156  149560 start.go:360] acquireMachinesLock for kubernetes-upgrade-117510: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:00:16.913217  149560 start.go:364] duration metric: took 30.769µs to acquireMachinesLock for "kubernetes-upgrade-117510"
	I0826 12:00:16.913237  149560 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:00:16.913244  149560 fix.go:54] fixHost starting: 
	I0826 12:00:16.913629  149560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:00:16.913675  149560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:00:16.930711  149560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0826 12:00:16.931379  149560 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:00:16.931947  149560 main.go:141] libmachine: Using API Version  1
	I0826 12:00:16.931973  149560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:00:16.932376  149560 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:00:16.932599  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:16.932792  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetState
	I0826 12:00:16.934653  149560 fix.go:112] recreateIfNeeded on kubernetes-upgrade-117510: state=Running err=<nil>
	W0826 12:00:16.934674  149560 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:00:16.936831  149560 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-117510" VM ...
	I0826 12:00:14.314931  149261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:00:14.815290  149261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:00:14.831376  149261 api_server.go:72] duration metric: took 1.017012035s to wait for apiserver process to appear ...
	I0826 12:00:14.831409  149261 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:00:14.831438  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:17.871763  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:00:17.871810  149261 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:00:17.871828  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:17.883796  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:00:17.883834  149261 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:00:18.332029  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:18.336905  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:00:18.336939  149261 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:00:18.832109  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:18.840711  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:00:18.840752  149261 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:00:19.331604  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:19.351329  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:00:19.351367  149261 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:00:19.831874  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:19.836355  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0826 12:00:19.843825  149261 api_server.go:141] control plane version: v1.31.0
	I0826 12:00:19.843865  149261 api_server.go:131] duration metric: took 5.012448745s to wait for apiserver health ...
	I0826 12:00:19.843890  149261 cni.go:84] Creating CNI manager for ""
	I0826 12:00:19.843899  149261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:19.845968  149261 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:00:16.938054  149560 machine.go:93] provisionDockerMachine start ...
	I0826 12:00:16.938078  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:16.938350  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:16.941021  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:16.941506  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:16.941541  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:16.941707  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:16.941919  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:16.942102  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:16.942276  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:16.942451  149560 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:16.942701  149560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 12:00:16.942714  149560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:00:17.059713  149560 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-117510
	
	I0826 12:00:17.059758  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 12:00:17.060071  149560 buildroot.go:166] provisioning hostname "kubernetes-upgrade-117510"
	I0826 12:00:17.060099  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 12:00:17.060277  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:17.063328  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.063743  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.063771  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.064014  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:17.064231  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.064433  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.064594  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:17.064773  149560 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:17.064964  149560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 12:00:17.064977  149560 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-117510 && echo "kubernetes-upgrade-117510" | sudo tee /etc/hostname
	I0826 12:00:17.204215  149560 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-117510
	
	I0826 12:00:17.204261  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:17.207227  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.207621  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.207659  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.207901  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:17.208137  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.208341  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.208520  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:17.208704  149560 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:17.208910  149560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 12:00:17.208938  149560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-117510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-117510/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-117510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:00:17.328717  149560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:00:17.328763  149560 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:00:17.328795  149560 buildroot.go:174] setting up certificates
	I0826 12:00:17.328807  149560 provision.go:84] configureAuth start
	I0826 12:00:17.328825  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetMachineName
	I0826 12:00:17.329124  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 12:00:17.332260  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.332592  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.332645  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.332912  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:17.335282  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.335697  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.335732  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.335945  149560 provision.go:143] copyHostCerts
	I0826 12:00:17.336019  149560 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:00:17.336041  149560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:00:17.336111  149560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:00:17.336221  149560 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:00:17.336231  149560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:00:17.336264  149560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:00:17.336336  149560 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:00:17.336346  149560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:00:17.336374  149560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:00:17.336437  149560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-117510 san=[127.0.0.1 192.168.50.121 kubernetes-upgrade-117510 localhost minikube]
	I0826 12:00:17.630014  149560 provision.go:177] copyRemoteCerts
	I0826 12:00:17.630081  149560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:00:17.630108  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:17.633181  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.633701  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.633737  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.633925  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:17.634190  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.634392  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:17.634702  149560 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 12:00:17.729233  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:00:17.763635  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0826 12:00:17.799867  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:00:17.826897  149560 provision.go:87] duration metric: took 498.069603ms to configureAuth
	I0826 12:00:17.826936  149560 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:00:17.827113  149560 config.go:182] Loaded profile config "kubernetes-upgrade-117510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:17.827186  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:17.830707  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.831122  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:17.831173  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:17.831375  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:17.831673  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.831878  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:17.832069  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:17.832284  149560 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:17.832546  149560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 12:00:17.832579  149560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:00:18.719877  149560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:00:18.719916  149560 machine.go:96] duration metric: took 1.781842604s to provisionDockerMachine
	I0826 12:00:18.719932  149560 start.go:293] postStartSetup for "kubernetes-upgrade-117510" (driver="kvm2")
	I0826 12:00:18.719949  149560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:00:18.719982  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:18.720359  149560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:00:18.720395  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:18.722945  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:18.723272  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:18.723298  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:18.723473  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:18.723665  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:18.723852  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:18.723996  149560 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 12:00:18.845944  149560 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:00:18.854858  149560 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:00:18.854898  149560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:00:18.855025  149560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:00:18.855137  149560 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:00:18.855267  149560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:00:18.934530  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:00:19.021294  149560 start.go:296] duration metric: took 301.337249ms for postStartSetup
	I0826 12:00:19.021357  149560 fix.go:56] duration metric: took 2.108112429s for fixHost
	I0826 12:00:19.021400  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:19.024869  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.025391  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:19.025436  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.025778  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:19.025982  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:19.026172  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:19.026368  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:19.026542  149560 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:19.026768  149560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.121 22 <nil> <nil>}
	I0826 12:00:19.026785  149560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:00:19.240223  149560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724673619.224300803
	
	I0826 12:00:19.240263  149560 fix.go:216] guest clock: 1724673619.224300803
	I0826 12:00:19.240275  149560 fix.go:229] Guest: 2024-08-26 12:00:19.224300803 +0000 UTC Remote: 2024-08-26 12:00:19.021364014 +0000 UTC m=+2.287170357 (delta=202.936789ms)
	I0826 12:00:19.240310  149560 fix.go:200] guest clock delta is within tolerance: 202.936789ms
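	(Note: the clock check above subtracts the host ("Remote") timestamp from the guest timestamp returned by `date +%s.%N` and accepts the skew when it is under minikube's tolerance. A worked check of the delta reported in the log, not part of the log itself:

	    # Guest: ...12:00:19.224300803   Remote: ...12:00:19.021364014
	    awk 'BEGIN { printf "%.6f ms\n", (19.224300803 - 19.021364014) * 1000 }'
	    # prints 202.936789 ms, matching the "within tolerance" delta above
	)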
	I0826 12:00:19.240319  149560 start.go:83] releasing machines lock for "kubernetes-upgrade-117510", held for 2.327089598s
	I0826 12:00:19.240351  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:19.240700  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 12:00:19.243960  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.244486  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:19.244552  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.244668  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:19.245303  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:19.245522  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .DriverName
	I0826 12:00:19.245630  149560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:00:19.245691  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:19.245817  149560 ssh_runner.go:195] Run: cat /version.json
	I0826 12:00:19.245839  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHHostname
	I0826 12:00:19.249632  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.250115  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:19.250146  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.250431  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:19.250546  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.250616  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:19.250811  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:19.251015  149560 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 12:00:19.251541  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:19.251572  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:19.251826  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHPort
	I0826 12:00:19.252073  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHKeyPath
	I0826 12:00:19.252253  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetSSHUsername
	I0826 12:00:19.252415  149560 sshutil.go:53] new ssh client: &{IP:192.168.50.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/kubernetes-upgrade-117510/id_rsa Username:docker}
	I0826 12:00:19.474389  149560 ssh_runner.go:195] Run: systemctl --version
	I0826 12:00:19.480893  149560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:00:19.652203  149560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:00:19.658201  149560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:00:19.658307  149560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:00:19.669139  149560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 12:00:19.669179  149560 start.go:495] detecting cgroup driver to use...
	I0826 12:00:19.669258  149560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:00:19.693239  149560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:00:19.708823  149560 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:00:19.708917  149560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:00:19.724011  149560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:00:19.739252  149560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:00:19.961985  149560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:00:20.151442  149560 docker.go:233] disabling docker service ...
	I0826 12:00:20.151527  149560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:00:20.173717  149560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:00:20.189290  149560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:00:20.348070  149560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:00:20.510856  149560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:00:20.525847  149560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:00:20.549787  149560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:00:20.549862  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.570051  149560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:00:20.570139  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.589954  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.603387  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.617974  149560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:00:20.630702  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.643700  149560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.656812  149560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:20.669461  149560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:00:20.680175  149560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:00:20.690553  149560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:00:20.863863  149560 ssh_runner.go:195] Run: sudo systemctl restart crio
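	(Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and CRI-O restart. A quick way to confirm the edits landed — a sketch only; TOML section headers are omitted from the expected output:

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the substitutions in the log:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	)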
	I0826 12:00:21.250473  149560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:00:21.250561  149560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:00:21.260386  149560 start.go:563] Will wait 60s for crictl version
	I0826 12:00:21.260463  149560 ssh_runner.go:195] Run: which crictl
	I0826 12:00:21.265187  149560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:00:21.306709  149560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:00:21.306805  149560 ssh_runner.go:195] Run: crio --version
	I0826 12:00:21.338563  149560 ssh_runner.go:195] Run: crio --version
	I0826 12:00:21.372702  149560 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:00:21.374089  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) Calling .GetIP
	I0826 12:00:21.377383  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:21.377828  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:52:f8", ip: ""} in network mk-kubernetes-upgrade-117510: {Iface:virbr2 ExpiryTime:2024-08-26 12:59:51 +0000 UTC Type:0 Mac:52:54:00:4f:52:f8 Iaid: IPaddr:192.168.50.121 Prefix:24 Hostname:kubernetes-upgrade-117510 Clientid:01:52:54:00:4f:52:f8}
	I0826 12:00:21.377859  149560 main.go:141] libmachine: (kubernetes-upgrade-117510) DBG | domain kubernetes-upgrade-117510 has defined IP address 192.168.50.121 and MAC address 52:54:00:4f:52:f8 in network mk-kubernetes-upgrade-117510
	I0826 12:00:21.378109  149560 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:00:21.382681  149560 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:00:21.382811  149560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:00:21.382893  149560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:00:21.429510  149560 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:00:21.429551  149560 crio.go:433] Images already preloaded, skipping extraction
	I0826 12:00:21.429625  149560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:00:21.470722  149560 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:00:21.470749  149560 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:00:21.470758  149560 kubeadm.go:934] updating node { 192.168.50.121 8443 v1.31.0 crio true true} ...
	I0826 12:00:21.470869  149560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-117510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:00:21.470937  149560 ssh_runner.go:195] Run: crio config
	I0826 12:00:21.534303  149560 cni.go:84] Creating CNI manager for ""
	I0826 12:00:21.534328  149560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:21.534338  149560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:00:21.534368  149560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.121 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-117510 NodeName:kubernetes-upgrade-117510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:00:21.534735  149560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-117510"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:00:21.534822  149560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:00:21.546705  149560 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:00:21.546791  149560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:00:21.558252  149560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0826 12:00:21.579271  149560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:00:21.597901  149560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
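	(Note: the 2169-byte kubeadm.yaml rendered above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new by the scp line just before this note. If one wanted to sanity-check such a file by hand, kubeadm ships a config validator; a sketch using the binary path from the log — kubeadm may warn that the v1beta3 API is deprecated on 1.31:

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new
	)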
	I0826 12:00:21.617465  149560 ssh_runner.go:195] Run: grep 192.168.50.121	control-plane.minikube.internal$ /etc/hosts
	I0826 12:00:21.622813  149560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:00:21.762930  149560 ssh_runner.go:195] Run: sudo systemctl start kubelet
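	(Note: the two `scp memory` lines above install the kubelet systemd unit (352 bytes) and its kubeadm drop-in (325 bytes) before the daemon-reload/start pair. A quick check that systemd picked both up — a sketch, with the paths as written in the log:

	    systemctl cat kubelet       # should show /lib/systemd/system/kubelet.service
	                                # plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl is-active kubelet
	)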
	I0826 12:00:21.779621  149560 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510 for IP: 192.168.50.121
	I0826 12:00:21.779646  149560 certs.go:194] generating shared ca certs ...
	I0826 12:00:21.779667  149560 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:00:21.779854  149560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:00:21.779919  149560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:00:21.779931  149560 certs.go:256] generating profile certs ...
	I0826 12:00:21.780026  149560 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/client.key
	I0826 12:00:21.780109  149560 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key.794b295d
	I0826 12:00:21.780164  149560 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key
	I0826 12:00:21.780309  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:00:21.780349  149560 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:00:21.780364  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:00:21.780461  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:00:21.780509  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:00:21.780543  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:00:21.780616  149560 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:00:21.781419  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:00:19.847489  149261 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:00:19.861068  149261 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:00:19.881236  149261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:00:19.881348  149261 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0826 12:00:19.881373  149261 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0826 12:00:19.893295  149261 system_pods.go:59] 6 kube-system pods found
	I0826 12:00:19.893343  149261 system_pods.go:61] "coredns-6f6b679f8f-mrsqd" [87d3bb7c-c342-4e1d-a968-4bce3cffcd28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:00:19.893355  149261 system_pods.go:61] "etcd-pause-585941" [7b3e42bb-dfb8-4e1c-a207-58ad5b4db4a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:00:19.893364  149261 system_pods.go:61] "kube-apiserver-pause-585941" [d87291db-5d08-4821-b0ef-8c69ad30903a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:00:19.893374  149261 system_pods.go:61] "kube-controller-manager-pause-585941" [847f5f6f-0015-4dd3-a8c5-226b5f766d47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:00:19.893383  149261 system_pods.go:61] "kube-proxy-shqfk" [78f3c9d3-c561-4dc3-b495-19ef43f0d35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0826 12:00:19.893391  149261 system_pods.go:61] "kube-scheduler-pause-585941" [11ab98fe-0037-44ff-b5dc-93bf9609bfee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:00:19.893401  149261 system_pods.go:74] duration metric: took 12.136582ms to wait for pod list to return data ...
	I0826 12:00:19.893411  149261 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:00:19.903493  149261 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:00:19.903532  149261 node_conditions.go:123] node cpu capacity is 2
	I0826 12:00:19.903545  149261 node_conditions.go:105] duration metric: took 10.128149ms to run NodePressure ...
	I0826 12:00:19.903570  149261 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:00:20.232042  149261 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:00:20.236724  149261 kubeadm.go:739] kubelet initialised
	I0826 12:00:20.236748  149261 kubeadm.go:740] duration metric: took 4.677631ms waiting for restarted kubelet to initialise ...
	I0826 12:00:20.236757  149261 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:00:20.242933  149261 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:22.258785  149261 pod_ready.go:103] pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace has status "Ready":"False"
	I0826 12:00:21.808394  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:00:21.836254  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:00:21.862644  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:00:21.924805  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0826 12:00:21.989116  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:00:22.083150  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:00:22.221489  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/kubernetes-upgrade-117510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:00:22.273439  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:00:22.304035  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:00:22.331745  149560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:00:22.360441  149560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:00:22.381074  149560 ssh_runner.go:195] Run: openssl version
	I0826 12:00:22.389672  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:00:22.404832  149560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:22.409902  149560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:22.409983  149560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:22.416517  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:00:22.426702  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:00:22.438671  149560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:00:22.443939  149560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:00:22.444040  149560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:00:22.450162  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:00:22.460334  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:00:22.472131  149560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:00:22.476946  149560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:00:22.477036  149560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:00:22.483048  149560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
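	(Note: the openssl/ln pairs above install each CA into the guest's trust store under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). The same pattern for one certificate, using the paths from the log, is simply:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # hash-named link OpenSSL looks up
	)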
	I0826 12:00:22.492466  149560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:00:22.497733  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:00:22.503664  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:00:22.509374  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:00:22.514952  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:00:22.520904  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:00:22.526867  149560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
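	(Note: each `-checkend 86400` run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if it would expire inside that window. A sketch of using that exit status directly, with one of the cert paths from the log:

	    if sudo openssl x509 -noout -checkend 86400 \
	          -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	        echo "cert valid for at least another 24h"
	    else
	        echo "cert expires within 24h"
	    fi
	)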
	I0826 12:00:22.532571  149560 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-117510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-117510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:22.532686  149560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:00:22.532759  149560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:00:22.571229  149560 cri.go:89] found id: "0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8"
	I0826 12:00:22.571261  149560 cri.go:89] found id: "660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d"
	I0826 12:00:22.571265  149560 cri.go:89] found id: "a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815"
	I0826 12:00:22.571269  149560 cri.go:89] found id: "93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae"
	I0826 12:00:22.571272  149560 cri.go:89] found id: ""
	I0826 12:00:22.571323  149560 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.893073345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673631893049485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=337e8588-bf65-4c4c-887b-3a2cb2c9b0e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.893545411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b09df715-f76b-46f6-a031-302f34861a56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.893606288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b09df715-f76b-46f6-a031-302f34861a56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.893820514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ac9a5300e6979285a78f07725f79f83697c2920c2a46c4962af0c37fad85c0,PodSandboxId:082c6dd56ab91b9cdded5fc2387409c02af823d458cffd54b55851676fdb26f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673624683857318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa04e0d70a39cf55f40ca59ed43ee3d73cb0171ed0b0ab06af8bbb68722ec657,PodSandboxId:3c79c513ca0c46189e044ebf6b4b25349bc1518a678f9f8d7f729b4103cb94fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673624669002346,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09ec98c17b6a68657e0f9696bef4599fe6df9676614b9771a67cec72051359f8,PodSandboxId:2b954f6290abac2321550c62f4d014d69eb50f9c49846a3ee9fafe6570c3b009,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673624667781347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bec20b14bafff55513e79d3fb8c32e925b11a768911f2c78bf8d129a2d97a7f6,PodSandboxId:3119ce998fdafd1287118166bf3ec8cc214bc69abae0a7506b5dfcc431776c02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673624648011394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8,PodSandboxId:63157f2dc7c0c21291436a4d716b958774ee8077a6ba16e86f1819d1bd926db9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673619220156955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d,PodSandboxId:6f443d192e339c7c6bdb9b00aa689ce7738395724030304e93efc9de5eb3abd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673619156657537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae,PodSandboxId:7a98449146267e98795d062acf4f2efdc3fda91e0b134f1488a8463b420e5cad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673619058152638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815,PodSandboxId:e41dfddddb36ed6ed71f46a90e5b7b89642013ea88d332a4743d20a4792426dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673619059215054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b09df715-f76b-46f6-a031-302f34861a56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.935195137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0740bee9-322f-4fa0-bdaf-224d6f49cace name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.935290819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0740bee9-322f-4fa0-bdaf-224d6f49cace name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.936341877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5e625b7-1cee-4ee9-8d66-9da888bcb224 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.936837556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673631936810427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5e625b7-1cee-4ee9-8d66-9da888bcb224 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.937340757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19e276c2-7dda-4337-b8e9-813f71f5a1ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.937423697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19e276c2-7dda-4337-b8e9-813f71f5a1ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.937691836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ac9a5300e6979285a78f07725f79f83697c2920c2a46c4962af0c37fad85c0,PodSandboxId:082c6dd56ab91b9cdded5fc2387409c02af823d458cffd54b55851676fdb26f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673624683857318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa04e0d70a39cf55f40ca59ed43ee3d73cb0171ed0b0ab06af8bbb68722ec657,PodSandboxId:3c79c513ca0c46189e044ebf6b4b25349bc1518a678f9f8d7f729b4103cb94fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673624669002346,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09ec98c17b6a68657e0f9696bef4599fe6df9676614b9771a67cec72051359f8,PodSandboxId:2b954f6290abac2321550c62f4d014d69eb50f9c49846a3ee9fafe6570c3b009,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673624667781347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bec20b14bafff55513e79d3fb8c32e925b11a768911f2c78bf8d129a2d97a7f6,PodSandboxId:3119ce998fdafd1287118166bf3ec8cc214bc69abae0a7506b5dfcc431776c02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673624648011394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8,PodSandboxId:63157f2dc7c0c21291436a4d716b958774ee8077a6ba16e86f1819d1bd926db9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673619220156955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d,PodSandboxId:6f443d192e339c7c6bdb9b00aa689ce7738395724030304e93efc9de5eb3abd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673619156657537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae,PodSandboxId:7a98449146267e98795d062acf4f2efdc3fda91e0b134f1488a8463b420e5cad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673619058152638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815,PodSandboxId:e41dfddddb36ed6ed71f46a90e5b7b89642013ea88d332a4743d20a4792426dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673619059215054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19e276c2-7dda-4337-b8e9-813f71f5a1ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.999119004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59bb957c-094b-42bf-a584-a150c5e415db name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:31 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:31.999209283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59bb957c-094b-42bf-a584-a150c5e415db name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.000951167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2969f16-1b18-4c74-9155-120a65307180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.001423822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673632001392041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2969f16-1b18-4c74-9155-120a65307180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.004377360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a671aa89-226e-465c-880d-bf4e4fd213ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.004451641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a671aa89-226e-465c-880d-bf4e4fd213ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.004726316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ac9a5300e6979285a78f07725f79f83697c2920c2a46c4962af0c37fad85c0,PodSandboxId:082c6dd56ab91b9cdded5fc2387409c02af823d458cffd54b55851676fdb26f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673624683857318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa04e0d70a39cf55f40ca59ed43ee3d73cb0171ed0b0ab06af8bbb68722ec657,PodSandboxId:3c79c513ca0c46189e044ebf6b4b25349bc1518a678f9f8d7f729b4103cb94fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673624669002346,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09ec98c17b6a68657e0f9696bef4599fe6df9676614b9771a67cec72051359f8,PodSandboxId:2b954f6290abac2321550c62f4d014d69eb50f9c49846a3ee9fafe6570c3b009,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673624667781347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bec20b14bafff55513e79d3fb8c32e925b11a768911f2c78bf8d129a2d97a7f6,PodSandboxId:3119ce998fdafd1287118166bf3ec8cc214bc69abae0a7506b5dfcc431776c02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673624648011394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8,PodSandboxId:63157f2dc7c0c21291436a4d716b958774ee8077a6ba16e86f1819d1bd926db9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673619220156955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d,PodSandboxId:6f443d192e339c7c6bdb9b00aa689ce7738395724030304e93efc9de5eb3abd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673619156657537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae,PodSandboxId:7a98449146267e98795d062acf4f2efdc3fda91e0b134f1488a8463b420e5cad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673619058152638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815,PodSandboxId:e41dfddddb36ed6ed71f46a90e5b7b89642013ea88d332a4743d20a4792426dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673619059215054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a671aa89-226e-465c-880d-bf4e4fd213ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.070465101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a00f1de0-006a-48ed-95b7-427af8ddd8be name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.070602436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a00f1de0-006a-48ed-95b7-427af8ddd8be name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.072054408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9877200f-3634-40a4-8201-530bbcc886b0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.072442213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673632072417801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9877200f-3634-40a4-8201-530bbcc886b0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.074252793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02172dd4-c7db-4890-83d4-b79cbe2536eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.074320150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02172dd4-c7db-4890-83d4-b79cbe2536eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:32 kubernetes-upgrade-117510 crio[1883]: time="2024-08-26 12:00:32.074588164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ac9a5300e6979285a78f07725f79f83697c2920c2a46c4962af0c37fad85c0,PodSandboxId:082c6dd56ab91b9cdded5fc2387409c02af823d458cffd54b55851676fdb26f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673624683857318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa04e0d70a39cf55f40ca59ed43ee3d73cb0171ed0b0ab06af8bbb68722ec657,PodSandboxId:3c79c513ca0c46189e044ebf6b4b25349bc1518a678f9f8d7f729b4103cb94fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673624669002346,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09ec98c17b6a68657e0f9696bef4599fe6df9676614b9771a67cec72051359f8,PodSandboxId:2b954f6290abac2321550c62f4d014d69eb50f9c49846a3ee9fafe6570c3b009,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673624667781347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bec20b14bafff55513e79d3fb8c32e925b11a768911f2c78bf8d129a2d97a7f6,PodSandboxId:3119ce998fdafd1287118166bf3ec8cc214bc69abae0a7506b5dfcc431776c02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673624648011394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8,PodSandboxId:63157f2dc7c0c21291436a4d716b958774ee8077a6ba16e86f1819d1bd926db9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673619220156955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9fdeb593930ddcc9e0827787548a71c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d,PodSandboxId:6f443d192e339c7c6bdb9b00aa689ce7738395724030304e93efc9de5eb3abd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673619156657537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ee9f771460a85d3ab31445064270f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae,PodSandboxId:7a98449146267e98795d062acf4f2efdc3fda91e0b134f1488a8463b420e5cad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673619058152638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8e19094e861bdd480b7cdc8a5f0449,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815,PodSandboxId:e41dfddddb36ed6ed71f46a90e5b7b89642013ea88d332a4743d20a4792426dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673619059215054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-117510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce8fb5711d5829417feef7f2e3df3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02172dd4-c7db-4890-83d4-b79cbe2536eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1ac9a5300e69       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   082c6dd56ab91       kube-apiserver-kubernetes-upgrade-117510
	fa04e0d70a39c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   2                   3c79c513ca0c4       kube-controller-manager-kubernetes-upgrade-117510
	09ec98c17b6a6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   2b954f6290aba       etcd-kubernetes-upgrade-117510
	bec20b14bafff       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            2                   3119ce998fdaf       kube-scheduler-kubernetes-upgrade-117510
	0fceb982f6971       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   12 seconds ago      Exited              kube-apiserver            1                   63157f2dc7c0c       kube-apiserver-kubernetes-upgrade-117510
	660dca511c5a8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   13 seconds ago      Exited              kube-scheduler            1                   6f443d192e339       kube-scheduler-kubernetes-upgrade-117510
	a248ffceb28c9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   13 seconds ago      Exited              kube-controller-manager   1                   e41dfddddb36e       kube-controller-manager-kubernetes-upgrade-117510
	93f6bf45499a9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   7a98449146267       etcd-kubernetes-upgrade-117510
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-117510
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-117510
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:00:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-117510
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:00:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:00:28 +0000   Mon, 26 Aug 2024 12:00:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:00:28 +0000   Mon, 26 Aug 2024 12:00:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:00:28 +0000   Mon, 26 Aug 2024 12:00:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:00:28 +0000   Mon, 26 Aug 2024 12:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.121
	  Hostname:    kubernetes-upgrade-117510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f46a21f2cdb44a7aa95f8ba6bc008c73
	  System UUID:                f46a21f2-cdb4-4a7a-a95f-8ba6bc008c73
	  Boot ID:                    20e3ed36-d9f9-4e7b-beb0-26cb7f2c305e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-117510                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16s
	  kube-system                 kube-apiserver-kubernetes-upgrade-117510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-117510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-scheduler-kubernetes-upgrade-117510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-117510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-117510 event: Registered Node kubernetes-upgrade-117510 in Controller
	
	
	==> dmesg <==
	[  +2.035010] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.596330] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug26 12:00] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.061851] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070276] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.241284] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.112368] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.306136] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +4.165063] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +2.149516] systemd-fstab-generator[866]: Ignoring "noauto" option for root device
	[  +0.061124] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.218058] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.091354] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.986785] systemd-fstab-generator[1802]: Ignoring "noauto" option for root device
	[  +0.218889] systemd-fstab-generator[1814]: Ignoring "noauto" option for root device
	[  +0.208524] systemd-fstab-generator[1828]: Ignoring "noauto" option for root device
	[  +0.163020] systemd-fstab-generator[1840]: Ignoring "noauto" option for root device
	[  +0.359846] systemd-fstab-generator[1870]: Ignoring "noauto" option for root device
	[  +0.085290] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.821410] systemd-fstab-generator[2057]: Ignoring "noauto" option for root device
	[  +2.240878] systemd-fstab-generator[2325]: Ignoring "noauto" option for root device
	[  +6.240124] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.088925] kauditd_printk_skb: 119 callbacks suppressed
	
	
	==> etcd [09ec98c17b6a68657e0f9696bef4599fe6df9676614b9771a67cec72051359f8] <==
	{"level":"info","ts":"2024-08-26T12:00:25.041941Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6c591ed8ab34e6","local-member-id":"c8741c8bf142da73","added-peer-id":"c8741c8bf142da73","added-peer-peer-urls":["https://192.168.50.121:2380"]}
	{"level":"info","ts":"2024-08-26T12:00:25.042031Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6c591ed8ab34e6","local-member-id":"c8741c8bf142da73","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:25.042070Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:25.045671Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:25.049961Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:00:25.050302Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c8741c8bf142da73","initial-advertise-peer-urls":["https://192.168.50.121:2380"],"listen-peer-urls":["https://192.168.50.121:2380"],"advertise-client-urls":["https://192.168.50.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:00:25.052606Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:00:25.052743Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.121:2380"}
	{"level":"info","ts":"2024-08-26T12:00:25.052771Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.121:2380"}
	{"level":"info","ts":"2024-08-26T12:00:26.898878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:26.898990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:26.899039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 received MsgPreVoteResp from c8741c8bf142da73 at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:26.899077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:26.899101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 received MsgVoteResp from c8741c8bf142da73 at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:26.899128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 became leader at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:26.899153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c8741c8bf142da73 elected leader c8741c8bf142da73 at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:26.904567Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c8741c8bf142da73","local-member-attributes":"{Name:kubernetes-upgrade-117510 ClientURLs:[https://192.168.50.121:2379]}","request-path":"/0/members/c8741c8bf142da73/attributes","cluster-id":"c6c591ed8ab34e6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:00:26.904602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:26.904699Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:26.905273Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:26.905309Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:26.905837Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:26.905988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:26.906750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:00:26.906773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.121:2379"}
	
	
	==> etcd [93f6bf45499a953e72bedcb72b6ffe9f14f8440449c75950901141399ac9d7ae] <==
	{"level":"info","ts":"2024-08-26T12:00:19.489717Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-26T12:00:19.527897Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c6c591ed8ab34e6","local-member-id":"c8741c8bf142da73","commit-index":297}
	{"level":"info","ts":"2024-08-26T12:00:19.527986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-26T12:00:19.528009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 became follower at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:19.528018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c8741c8bf142da73 [peers: [], term: 2, commit: 297, applied: 0, lastindex: 297, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-26T12:00:19.530195Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-26T12:00:19.538661Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":291}
	{"level":"info","ts":"2024-08-26T12:00:19.573629Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-26T12:00:19.578320Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"c8741c8bf142da73","timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:00:19.579668Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"c8741c8bf142da73"}
	{"level":"info","ts":"2024-08-26T12:00:19.579729Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"c8741c8bf142da73","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-26T12:00:19.580288Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:19.585702Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-26T12:00:19.585906Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-26T12:00:19.585958Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-26T12:00:19.585968Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-26T12:00:19.586172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8741c8bf142da73 switched to configuration voters=(14444201292257745523)"}
	{"level":"info","ts":"2024-08-26T12:00:19.586256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6c591ed8ab34e6","local-member-id":"c8741c8bf142da73","added-peer-id":"c8741c8bf142da73","added-peer-peer-urls":["https://192.168.50.121:2380"]}
	{"level":"info","ts":"2024-08-26T12:00:19.586352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6c591ed8ab34e6","local-member-id":"c8741c8bf142da73","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:19.586405Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:19.594318Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:00:19.594635Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c8741c8bf142da73","initial-advertise-peer-urls":["https://192.168.50.121:2380"],"listen-peer-urls":["https://192.168.50.121:2380"],"advertise-client-urls":["https://192.168.50.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:00:19.594687Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:00:19.595102Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.121:2380"}
	{"level":"info","ts":"2024-08-26T12:00:19.597561Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.121:2380"}
	
	
	==> kernel <==
	 12:00:32 up 0 min,  0 users,  load average: 0.99, 0.24, 0.08
	Linux kubernetes-upgrade-117510 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0fceb982f69719f94563bdf306b6b6792f71059b5cd6e66cc23623c85cb52ee8] <==
	I0826 12:00:19.593245       1 options.go:228] external host was not specified, using 192.168.50.121
	I0826 12:00:19.607674       1 server.go:142] Version: v1.31.0
	I0826 12:00:19.607778       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:20.591025       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0826 12:00:20.618104       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 12:00:20.626594       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0826 12:00:20.626628       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0826 12:00:20.626816       1 instance.go:232] Using reconciler: lease
	W0826 12:00:20.894180       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45542->127.0.0.1:2379: read: connection reset by peer"
	W0826 12:00:20.894191       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45540->127.0.0.1:2379: read: connection reset by peer"
	W0826 12:00:20.894328       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:45534->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-apiserver [a1ac9a5300e6979285a78f07725f79f83697c2920c2a46c4962af0c37fad85c0] <==
	I0826 12:00:28.253655       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 12:00:28.253781       1 shared_informer.go:320] Caches are synced for configmaps
	I0826 12:00:28.253835       1 aggregator.go:171] initial CRD sync complete...
	I0826 12:00:28.253859       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 12:00:28.253881       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 12:00:28.253903       1 cache.go:39] Caches are synced for autoregister controller
	I0826 12:00:28.272755       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 12:00:28.277302       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 12:00:28.277389       1 policy_source.go:224] refreshing policies
	I0826 12:00:28.346136       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 12:00:28.346226       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 12:00:28.346349       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 12:00:28.348310       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 12:00:28.351014       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 12:00:28.351223       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0826 12:00:28.364987       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0826 12:00:29.146592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 12:00:29.873613       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 12:00:29.890847       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 12:00:29.929224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 12:00:30.051663       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 12:00:30.059614       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 12:00:32.286750       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 12:00:32.468189       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0826 12:00:32.655882       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a248ffceb28c9e57de2c2b9e689f7cda7f6175896dbefe2225f5f5e10eb81815] <==
	
	
	==> kube-controller-manager [fa04e0d70a39cf55f40ca59ed43ee3d73cb0171ed0b0ab06af8bbb68722ec657] <==
	I0826 12:00:31.964295       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0826 12:00:31.964374       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-117510"
	I0826 12:00:31.964433       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0826 12:00:31.964625       1 shared_informer.go:320] Caches are synced for job
	I0826 12:00:31.964662       1 shared_informer.go:320] Caches are synced for TTL
	I0826 12:00:31.994951       1 shared_informer.go:320] Caches are synced for node
	I0826 12:00:31.995078       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0826 12:00:31.995139       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0826 12:00:31.995157       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0826 12:00:31.995164       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0826 12:00:31.995772       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0826 12:00:31.995863       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-117510"
	I0826 12:00:31.999431       1 shared_informer.go:320] Caches are synced for persistent volume
	I0826 12:00:32.009313       1 shared_informer.go:320] Caches are synced for daemon sets
	I0826 12:00:32.012979       1 shared_informer.go:320] Caches are synced for GC
	I0826 12:00:32.019651       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:32.030808       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-117510" podCIDRs=["10.244.0.0/24"]
	I0826 12:00:32.030862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-117510"
	I0826 12:00:32.031227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-117510"
	I0826 12:00:32.032439       1 shared_informer.go:320] Caches are synced for cronjob
	I0826 12:00:32.046599       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:32.461048       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:32.461071       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0826 12:00:32.475988       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:32.650441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-117510"
	
	
	==> kube-scheduler [660dca511c5a8935d7de3b6bf146c5e22f7771a8efa8fbf97771ec991b88d96d] <==
	
	
	==> kube-scheduler [bec20b14bafff55513e79d3fb8c32e925b11a768911f2c78bf8d129a2d97a7f6] <==
	I0826 12:00:25.456603       1 serving.go:386] Generated self-signed cert in-memory
	W0826 12:00:28.196223       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 12:00:28.196308       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 12:00:28.196318       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 12:00:28.196324       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 12:00:28.254359       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 12:00:28.254529       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:28.257038       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 12:00:28.257092       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 12:00:28.257257       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 12:00:28.257336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 12:00:28.358715       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:00:24 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:24.911445    2332 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-117510"
	Aug 26 12:00:24 kubernetes-upgrade-117510 kubelet[2332]: E0826 12:00:24.912588    2332 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.121:8443: connect: connection refused" node="kubernetes-upgrade-117510"
	Aug 26 12:00:25 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:25.714538    2332 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-117510"
	Aug 26 12:00:28 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:28.289074    2332 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-117510"
	Aug 26 12:00:28 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:28.289645    2332 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-117510"
	Aug 26 12:00:29 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:29.130807    2332 apiserver.go:52] "Watching apiserver"
	Aug 26 12:00:29 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:29.137677    2332 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.100172    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/141f7036-013d-4d48-afe8-5a948192d264-tmp\") pod \"storage-provisioner\" (UID: \"141f7036-013d-4d48-afe8-5a948192d264\") " pod="kube-system/storage-provisioner"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.100229    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lggd\" (UniqueName: \"kubernetes.io/projected/141f7036-013d-4d48-afe8-5a948192d264-kube-api-access-7lggd\") pod \"storage-provisioner\" (UID: \"141f7036-013d-4d48-afe8-5a948192d264\") " pod="kube-system/storage-provisioner"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.100921    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-117510" podStartSLOduration=18.100889318 podStartE2EDuration="18.100889318s" podCreationTimestamp="2024-08-26 12:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-26 12:00:32.073844753 +0000 UTC m=+8.051169674" watchObservedRunningTime="2024-08-26 12:00:32.100889318 +0000 UTC m=+8.078214254"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.124807    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-117510" podStartSLOduration=16.124787027 podStartE2EDuration="16.124787027s" podCreationTimestamp="2024-08-26 12:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-26 12:00:32.124423401 +0000 UTC m=+8.101748334" watchObservedRunningTime="2024-08-26 12:00:32.124787027 +0000 UTC m=+8.102111960"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.124974    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-117510" podStartSLOduration=18.124969214 podStartE2EDuration="18.124969214s" podCreationTimestamp="2024-08-26 12:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-26 12:00:32.101737411 +0000 UTC m=+8.079062351" watchObservedRunningTime="2024-08-26 12:00:32.124969214 +0000 UTC m=+8.102294152"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.179383    2332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-117510" podStartSLOduration=4.17936548 podStartE2EDuration="4.17936548s" podCreationTimestamp="2024-08-26 12:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-26 12:00:32.141938874 +0000 UTC m=+8.119263812" watchObservedRunningTime="2024-08-26 12:00:32.17936548 +0000 UTC m=+8.156690422"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: E0826 12:00:32.221823    2332 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: E0826 12:00:32.221876    2332 projected.go:194] Error preparing data for projected volume kube-api-access-7lggd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: E0826 12:00:32.222021    2332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/141f7036-013d-4d48-afe8-5a948192d264-kube-api-access-7lggd podName:141f7036-013d-4d48-afe8-5a948192d264 nodeName:}" failed. No retries permitted until 2024-08-26 12:00:32.721926757 +0000 UTC m=+8.699251692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7lggd" (UniqueName: "kubernetes.io/projected/141f7036-013d-4d48-afe8-5a948192d264-kube-api-access-7lggd") pod "storage-provisioner" (UID: "141f7036-013d-4d48-afe8-5a948192d264") : configmap "kube-root-ca.crt" not found
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.809775    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bacf5a67-14c5-461e-be65-c9f5a76a5d94-kube-proxy\") pod \"kube-proxy-gbclb\" (UID: \"bacf5a67-14c5-461e-be65-c9f5a76a5d94\") " pod="kube-system/kube-proxy-gbclb"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.809869    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bacf5a67-14c5-461e-be65-c9f5a76a5d94-lib-modules\") pod \"kube-proxy-gbclb\" (UID: \"bacf5a67-14c5-461e-be65-c9f5a76a5d94\") " pod="kube-system/kube-proxy-gbclb"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.809895    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dss4\" (UniqueName: \"kubernetes.io/projected/bacf5a67-14c5-461e-be65-c9f5a76a5d94-kube-api-access-5dss4\") pod \"kube-proxy-gbclb\" (UID: \"bacf5a67-14c5-461e-be65-c9f5a76a5d94\") " pod="kube-system/kube-proxy-gbclb"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.809982    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bacf5a67-14c5-461e-be65-c9f5a76a5d94-xtables-lock\") pod \"kube-proxy-gbclb\" (UID: \"bacf5a67-14c5-461e-be65-c9f5a76a5d94\") " pod="kube-system/kube-proxy-gbclb"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.810720    2332 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.911026    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881b973e-e4ae-4228-9a46-59ccb35852be-config-volume\") pod \"coredns-6f6b679f8f-r5fwf\" (UID: \"881b973e-e4ae-4228-9a46-59ccb35852be\") " pod="kube-system/coredns-6f6b679f8f-r5fwf"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.911093    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/570ecc12-d8e6-404f-a339-b3b6ccf8bab5-config-volume\") pod \"coredns-6f6b679f8f-95xxr\" (UID: \"570ecc12-d8e6-404f-a339-b3b6ccf8bab5\") " pod="kube-system/coredns-6f6b679f8f-95xxr"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.911124    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvqj\" (UniqueName: \"kubernetes.io/projected/881b973e-e4ae-4228-9a46-59ccb35852be-kube-api-access-pmvqj\") pod \"coredns-6f6b679f8f-r5fwf\" (UID: \"881b973e-e4ae-4228-9a46-59ccb35852be\") " pod="kube-system/coredns-6f6b679f8f-r5fwf"
	Aug 26 12:00:32 kubernetes-upgrade-117510 kubelet[2332]: I0826 12:00:32.911167    2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxmz\" (UniqueName: \"kubernetes.io/projected/570ecc12-d8e6-404f-a339-b3b6ccf8bab5-kube-api-access-4wxmz\") pod \"coredns-6f6b679f8f-95xxr\" (UID: \"570ecc12-d8e6-404f-a339-b3b6ccf8bab5\") " pod="kube-system/coredns-6f6b679f8f-95xxr"
	

-- /stdout --
** stderr ** 
	E0826 12:00:31.498909  149721 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19501-99403/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-117510 -n kubernetes-upgrade-117510
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-117510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-6f6b679f8f-95xxr coredns-6f6b679f8f-r5fwf kube-proxy-gbclb
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-117510 describe pod coredns-6f6b679f8f-95xxr coredns-6f6b679f8f-r5fwf kube-proxy-gbclb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-117510 describe pod coredns-6f6b679f8f-95xxr coredns-6f6b679f8f-r5fwf kube-proxy-gbclb: exit status 1 (80.977869ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6f6b679f8f-95xxr" not found
	Error from server (NotFound): pods "coredns-6f6b679f8f-r5fwf" not found
	Error from server (NotFound): pods "kube-proxy-gbclb" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-117510 describe pod coredns-6f6b679f8f-95xxr coredns-6f6b679f8f-r5fwf kube-proxy-gbclb: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-117510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-117510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-117510: (1.136483318s)
--- FAIL: TestKubernetesUpgrade (387.76s)
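Note on the "bufio.Scanner: token too long" line in the stderr block above: that message is Go's bufio.Scanner hitting its default per-line limit (bufio.MaxScanTokenSize, 64 KiB) on an over-long line in lastStart.txt, so the harness could not echo the previous start log. As a minimal illustrative sketch only — this is not minikube's actual logs.go code, and the file path is just a stand-in — a reader can be given a larger buffer with Scanner.Buffer before scanning:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Stand-in path for illustration; the report above references
	// .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB per line); longer
	// lines fail with "bufio.Scanner: token too long". Start with a 1 MiB
	// buffer and allow growth up to 10 MiB.
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}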

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (292.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m51.82433844s)

-- stdout --
	* [old-k8s-version-839656] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-839656" primary control-plane node in "old-k8s-version-839656" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0826 11:58:54.191707  148739 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:58:54.191828  148739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:58:54.191837  148739 out.go:358] Setting ErrFile to fd 2...
	I0826 11:58:54.191847  148739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:58:54.192032  148739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:58:54.192656  148739 out.go:352] Setting JSON to false
	I0826 11:58:54.193655  148739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6075,"bootTime":1724667459,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:58:54.193729  148739 start.go:139] virtualization: kvm guest
	I0826 11:58:54.196243  148739 out.go:177] * [old-k8s-version-839656] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:58:54.197776  148739 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:58:54.197856  148739 notify.go:220] Checking for updates...
	I0826 11:58:54.200097  148739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:58:54.201539  148739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:58:54.202817  148739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:58:54.204380  148739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:58:54.205878  148739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:58:54.207736  148739 config.go:182] Loaded profile config "cert-expiration-156240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:58:54.207842  148739 config.go:182] Loaded profile config "kubernetes-upgrade-117510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 11:58:54.207934  148739 config.go:182] Loaded profile config "pause-585941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:58:54.208031  148739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:58:54.248483  148739 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 11:58:54.249702  148739 start.go:297] selected driver: kvm2
	I0826 11:58:54.249717  148739 start.go:901] validating driver "kvm2" against <nil>
	I0826 11:58:54.249729  148739 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:58:54.250415  148739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:58:54.250497  148739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:58:54.267689  148739 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:58:54.267756  148739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 11:58:54.267988  148739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 11:58:54.268055  148739 cni.go:84] Creating CNI manager for ""
	I0826 11:58:54.268071  148739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:58:54.268081  148739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 11:58:54.268134  148739 start.go:340] cluster config:
	{Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:58:54.268227  148739 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:58:54.270239  148739 out.go:177] * Starting "old-k8s-version-839656" primary control-plane node in "old-k8s-version-839656" cluster
	I0826 11:58:54.271586  148739 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 11:58:54.271634  148739 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:58:54.271646  148739 cache.go:56] Caching tarball of preloaded images
	I0826 11:58:54.271734  148739 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:58:54.271750  148739 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0826 11:58:54.271902  148739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 11:58:54.271933  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json: {Name:mke00e01c3af222e02a01a5227ede4e50e8a2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:58:54.272122  148739 start.go:360] acquireMachinesLock for old-k8s-version-839656: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 11:59:13.884052  148739 start.go:364] duration metric: took 19.611882572s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 11:59:13.884142  148739 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 11:59:13.884288  148739 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 11:59:13.886663  148739 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0826 11:59:13.886909  148739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:59:13.886945  148739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:59:13.904416  148739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0826 11:59:13.904846  148739 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:59:13.905432  148739 main.go:141] libmachine: Using API Version  1
	I0826 11:59:13.905460  148739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:59:13.905832  148739 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:59:13.906086  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 11:59:13.906248  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:13.906423  148739 start.go:159] libmachine.API.Create for "old-k8s-version-839656" (driver="kvm2")
	I0826 11:59:13.906450  148739 client.go:168] LocalClient.Create starting
	I0826 11:59:13.906490  148739 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 11:59:13.906535  148739 main.go:141] libmachine: Decoding PEM data...
	I0826 11:59:13.906554  148739 main.go:141] libmachine: Parsing certificate...
	I0826 11:59:13.906609  148739 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 11:59:13.906627  148739 main.go:141] libmachine: Decoding PEM data...
	I0826 11:59:13.906639  148739 main.go:141] libmachine: Parsing certificate...
	I0826 11:59:13.906665  148739 main.go:141] libmachine: Running pre-create checks...
	I0826 11:59:13.906679  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .PreCreateCheck
	I0826 11:59:13.907048  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 11:59:13.907469  148739 main.go:141] libmachine: Creating machine...
	I0826 11:59:13.907484  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .Create
	I0826 11:59:13.907615  148739 main.go:141] libmachine: (old-k8s-version-839656) Creating KVM machine...
	I0826 11:59:13.908891  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found existing default KVM network
	I0826 11:59:13.910738  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:13.910577  148867 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:6b:e4} reservation:<nil>}
	I0826 11:59:13.911699  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:13.911606  148867 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:77:33} reservation:<nil>}
	I0826 11:59:13.912611  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:13.912526  148867 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:b5:d2} reservation:<nil>}
	I0826 11:59:13.913839  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:13.913752  148867 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002abdd0}
	I0826 11:59:13.913876  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | created network xml: 
	I0826 11:59:13.913889  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | <network>
	I0826 11:59:13.913899  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   <name>mk-old-k8s-version-839656</name>
	I0826 11:59:13.913905  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   <dns enable='no'/>
	I0826 11:59:13.913913  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   
	I0826 11:59:13.913918  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0826 11:59:13.913930  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |     <dhcp>
	I0826 11:59:13.913936  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0826 11:59:13.913945  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |     </dhcp>
	I0826 11:59:13.913959  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   </ip>
	I0826 11:59:13.913972  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG |   
	I0826 11:59:13.913981  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | </network>
	I0826 11:59:13.913988  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | 
	I0826 11:59:13.919379  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | trying to create private KVM network mk-old-k8s-version-839656 192.168.72.0/24...
	I0826 11:59:13.994303  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656 ...
	I0826 11:59:13.994338  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | private KVM network mk-old-k8s-version-839656 192.168.72.0/24 created
	I0826 11:59:13.994349  148739 main.go:141] libmachine: (old-k8s-version-839656) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 11:59:13.994371  148739 main.go:141] libmachine: (old-k8s-version-839656) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 11:59:13.994384  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:13.993226  148867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:59:14.253492  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:14.253379  148867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa...
	I0826 11:59:14.462260  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:14.462144  148867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/old-k8s-version-839656.rawdisk...
	I0826 11:59:14.462290  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Writing magic tar header
	I0826 11:59:14.462304  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Writing SSH key tar header
	I0826 11:59:14.462318  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:14.462263  148867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656 ...
	I0826 11:59:14.462409  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656
	I0826 11:59:14.462443  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 11:59:14.462456  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656 (perms=drwx------)
	I0826 11:59:14.462466  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:59:14.462478  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 11:59:14.462487  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 11:59:14.462502  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home/jenkins
	I0826 11:59:14.462518  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 11:59:14.462530  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Checking permissions on dir: /home
	I0826 11:59:14.462543  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 11:59:14.462552  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 11:59:14.462566  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 11:59:14.462580  148739 main.go:141] libmachine: (old-k8s-version-839656) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 11:59:14.462592  148739 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 11:59:14.462604  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Skipping /home - not owner
	I0826 11:59:14.463840  148739 main.go:141] libmachine: (old-k8s-version-839656) define libvirt domain using xml: 
	I0826 11:59:14.463867  148739 main.go:141] libmachine: (old-k8s-version-839656) <domain type='kvm'>
	I0826 11:59:14.463878  148739 main.go:141] libmachine: (old-k8s-version-839656)   <name>old-k8s-version-839656</name>
	I0826 11:59:14.463886  148739 main.go:141] libmachine: (old-k8s-version-839656)   <memory unit='MiB'>2200</memory>
	I0826 11:59:14.463898  148739 main.go:141] libmachine: (old-k8s-version-839656)   <vcpu>2</vcpu>
	I0826 11:59:14.463905  148739 main.go:141] libmachine: (old-k8s-version-839656)   <features>
	I0826 11:59:14.463914  148739 main.go:141] libmachine: (old-k8s-version-839656)     <acpi/>
	I0826 11:59:14.463926  148739 main.go:141] libmachine: (old-k8s-version-839656)     <apic/>
	I0826 11:59:14.463939  148739 main.go:141] libmachine: (old-k8s-version-839656)     <pae/>
	I0826 11:59:14.463949  148739 main.go:141] libmachine: (old-k8s-version-839656)     
	I0826 11:59:14.463976  148739 main.go:141] libmachine: (old-k8s-version-839656)   </features>
	I0826 11:59:14.463991  148739 main.go:141] libmachine: (old-k8s-version-839656)   <cpu mode='host-passthrough'>
	I0826 11:59:14.463999  148739 main.go:141] libmachine: (old-k8s-version-839656)   
	I0826 11:59:14.464010  148739 main.go:141] libmachine: (old-k8s-version-839656)   </cpu>
	I0826 11:59:14.464018  148739 main.go:141] libmachine: (old-k8s-version-839656)   <os>
	I0826 11:59:14.464029  148739 main.go:141] libmachine: (old-k8s-version-839656)     <type>hvm</type>
	I0826 11:59:14.464040  148739 main.go:141] libmachine: (old-k8s-version-839656)     <boot dev='cdrom'/>
	I0826 11:59:14.464054  148739 main.go:141] libmachine: (old-k8s-version-839656)     <boot dev='hd'/>
	I0826 11:59:14.464084  148739 main.go:141] libmachine: (old-k8s-version-839656)     <bootmenu enable='no'/>
	I0826 11:59:14.464108  148739 main.go:141] libmachine: (old-k8s-version-839656)   </os>
	I0826 11:59:14.464122  148739 main.go:141] libmachine: (old-k8s-version-839656)   <devices>
	I0826 11:59:14.464134  148739 main.go:141] libmachine: (old-k8s-version-839656)     <disk type='file' device='cdrom'>
	I0826 11:59:14.464155  148739 main.go:141] libmachine: (old-k8s-version-839656)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/boot2docker.iso'/>
	I0826 11:59:14.464168  148739 main.go:141] libmachine: (old-k8s-version-839656)       <target dev='hdc' bus='scsi'/>
	I0826 11:59:14.464178  148739 main.go:141] libmachine: (old-k8s-version-839656)       <readonly/>
	I0826 11:59:14.464190  148739 main.go:141] libmachine: (old-k8s-version-839656)     </disk>
	I0826 11:59:14.464204  148739 main.go:141] libmachine: (old-k8s-version-839656)     <disk type='file' device='disk'>
	I0826 11:59:14.464219  148739 main.go:141] libmachine: (old-k8s-version-839656)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 11:59:14.464240  148739 main.go:141] libmachine: (old-k8s-version-839656)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/old-k8s-version-839656.rawdisk'/>
	I0826 11:59:14.464248  148739 main.go:141] libmachine: (old-k8s-version-839656)       <target dev='hda' bus='virtio'/>
	I0826 11:59:14.464259  148739 main.go:141] libmachine: (old-k8s-version-839656)     </disk>
	I0826 11:59:14.464268  148739 main.go:141] libmachine: (old-k8s-version-839656)     <interface type='network'>
	I0826 11:59:14.464281  148739 main.go:141] libmachine: (old-k8s-version-839656)       <source network='mk-old-k8s-version-839656'/>
	I0826 11:59:14.464289  148739 main.go:141] libmachine: (old-k8s-version-839656)       <model type='virtio'/>
	I0826 11:59:14.464300  148739 main.go:141] libmachine: (old-k8s-version-839656)     </interface>
	I0826 11:59:14.464315  148739 main.go:141] libmachine: (old-k8s-version-839656)     <interface type='network'>
	I0826 11:59:14.464327  148739 main.go:141] libmachine: (old-k8s-version-839656)       <source network='default'/>
	I0826 11:59:14.464338  148739 main.go:141] libmachine: (old-k8s-version-839656)       <model type='virtio'/>
	I0826 11:59:14.464350  148739 main.go:141] libmachine: (old-k8s-version-839656)     </interface>
	I0826 11:59:14.464361  148739 main.go:141] libmachine: (old-k8s-version-839656)     <serial type='pty'>
	I0826 11:59:14.464375  148739 main.go:141] libmachine: (old-k8s-version-839656)       <target port='0'/>
	I0826 11:59:14.464388  148739 main.go:141] libmachine: (old-k8s-version-839656)     </serial>
	I0826 11:59:14.464402  148739 main.go:141] libmachine: (old-k8s-version-839656)     <console type='pty'>
	I0826 11:59:14.464413  148739 main.go:141] libmachine: (old-k8s-version-839656)       <target type='serial' port='0'/>
	I0826 11:59:14.464434  148739 main.go:141] libmachine: (old-k8s-version-839656)     </console>
	I0826 11:59:14.464446  148739 main.go:141] libmachine: (old-k8s-version-839656)     <rng model='virtio'>
	I0826 11:59:14.464475  148739 main.go:141] libmachine: (old-k8s-version-839656)       <backend model='random'>/dev/random</backend>
	I0826 11:59:14.464503  148739 main.go:141] libmachine: (old-k8s-version-839656)     </rng>
	I0826 11:59:14.464516  148739 main.go:141] libmachine: (old-k8s-version-839656)     
	I0826 11:59:14.464526  148739 main.go:141] libmachine: (old-k8s-version-839656)     
	I0826 11:59:14.464535  148739 main.go:141] libmachine: (old-k8s-version-839656)   </devices>
	I0826 11:59:14.464544  148739 main.go:141] libmachine: (old-k8s-version-839656) </domain>
	I0826 11:59:14.464554  148739 main.go:141] libmachine: (old-k8s-version-839656) 
	I0826 11:59:14.469084  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:a2:0d:78 in network default
	I0826 11:59:14.469748  148739 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 11:59:14.469769  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:14.470597  148739 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 11:59:14.471075  148739 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 11:59:14.471613  148739 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 11:59:14.472333  148739 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 11:59:15.900859  148739 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 11:59:15.902031  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:15.902610  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:15.902709  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:15.902605  148867 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0826 11:59:16.211508  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:16.212228  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:16.212260  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:16.212166  148867 retry.go:31] will retry after 243.011281ms: waiting for machine to come up
	I0826 11:59:16.457440  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:16.458142  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:16.458179  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:16.458059  148867 retry.go:31] will retry after 482.53788ms: waiting for machine to come up
	I0826 11:59:16.941752  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:16.942244  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:16.942281  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:16.942196  148867 retry.go:31] will retry after 386.513975ms: waiting for machine to come up
	I0826 11:59:17.330977  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:17.331513  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:17.331537  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:17.331481  148867 retry.go:31] will retry after 503.77536ms: waiting for machine to come up
	I0826 11:59:17.837310  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:17.837931  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:17.837968  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:17.837877  148867 retry.go:31] will retry after 740.73817ms: waiting for machine to come up
	I0826 11:59:18.580111  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:18.580700  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:18.580727  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:18.580661  148867 retry.go:31] will retry after 909.418281ms: waiting for machine to come up
	I0826 11:59:19.491397  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:19.492011  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:19.492038  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:19.491968  148867 retry.go:31] will retry after 952.915517ms: waiting for machine to come up
	I0826 11:59:20.446735  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:20.447345  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:20.447374  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:20.447297  148867 retry.go:31] will retry after 1.134965688s: waiting for machine to come up
	I0826 11:59:21.583888  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:21.584440  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:21.584473  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:21.584390  148867 retry.go:31] will retry after 1.919697026s: waiting for machine to come up
	I0826 11:59:23.505517  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:23.506092  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:23.506120  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:23.506044  148867 retry.go:31] will retry after 2.66738007s: waiting for machine to come up
	I0826 11:59:26.176678  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:26.177312  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:26.177339  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:26.177251  148867 retry.go:31] will retry after 2.681608291s: waiting for machine to come up
	I0826 11:59:28.861304  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:28.861872  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:28.861894  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:28.861823  148867 retry.go:31] will retry after 4.373682686s: waiting for machine to come up
	I0826 11:59:33.240846  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:33.241308  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 11:59:33.241332  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 11:59:33.241262  148867 retry.go:31] will retry after 5.414774682s: waiting for machine to come up
	I0826 11:59:38.660855  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.661463  148739 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 11:59:38.661493  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.661502  148739 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 11:59:38.662004  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656
	I0826 11:59:38.744000  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 11:59:38.744038  148739 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 11:59:38.744054  148739 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 11:59:38.747094  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.747511  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:38.747540  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.747626  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 11:59:38.747655  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 11:59:38.747687  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 11:59:38.747698  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 11:59:38.747711  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 11:59:38.875645  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 11:59:38.876065  148739 main.go:141] libmachine: (old-k8s-version-839656) KVM machine creation complete!
	I0826 11:59:38.876447  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 11:59:38.877042  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:38.877264  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:38.877467  148739 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 11:59:38.877493  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 11:59:38.879105  148739 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 11:59:38.879119  148739 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 11:59:38.879130  148739 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 11:59:38.879139  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:38.882434  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.883004  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:38.883029  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:38.883224  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:38.883421  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:38.883653  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:38.883912  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:38.884117  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:38.884373  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:38.884391  148739 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 11:59:38.998023  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:59:38.998047  148739 main.go:141] libmachine: Detecting the provisioner...
	I0826 11:59:38.998056  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.001237  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.001603  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.001636  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.001828  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.002072  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.002281  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.002433  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.002628  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:39.002875  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:39.002894  148739 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 11:59:39.111730  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 11:59:39.111887  148739 main.go:141] libmachine: found compatible host: buildroot
	I0826 11:59:39.111907  148739 main.go:141] libmachine: Provisioning with buildroot...
	I0826 11:59:39.111920  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 11:59:39.112195  148739 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 11:59:39.112224  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 11:59:39.112428  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.115603  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.115963  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.115987  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.116133  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.116320  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.116482  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.116613  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.116763  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:39.116960  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:39.116972  148739 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 11:59:39.245488  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 11:59:39.245522  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.248494  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.248961  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.248997  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.249195  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.249398  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.249585  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.249773  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.249965  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:39.250129  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:39.250145  148739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 11:59:39.368986  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 11:59:39.369024  148739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 11:59:39.369065  148739 buildroot.go:174] setting up certificates
	I0826 11:59:39.369082  148739 provision.go:84] configureAuth start
	I0826 11:59:39.369105  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 11:59:39.369464  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 11:59:39.372566  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.372998  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.373030  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.373236  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.376043  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.376499  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.376525  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.376766  148739 provision.go:143] copyHostCerts
	I0826 11:59:39.376832  148739 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 11:59:39.376881  148739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 11:59:39.376979  148739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 11:59:39.377103  148739 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 11:59:39.377115  148739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 11:59:39.377149  148739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 11:59:39.377232  148739 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 11:59:39.377242  148739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 11:59:39.377270  148739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 11:59:39.377343  148739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 11:59:39.461230  148739 provision.go:177] copyRemoteCerts
	I0826 11:59:39.461289  148739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 11:59:39.461314  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.464269  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.464671  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.464705  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.464908  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.465142  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.465336  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.465516  148739 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 11:59:39.554225  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 11:59:39.581627  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 11:59:39.606008  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 11:59:39.629316  148739 provision.go:87] duration metric: took 260.217612ms to configureAuth
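For context, the server certificate generated during configureAuth above (SANs 127.0.0.1, 192.168.72.136, localhost, minikube, old-k8s-version-839656) comes from minikube's own provisioning code; a roughly equivalent manual sketch with openssl is shown below. File names and the validity period are illustrative, not what provision.go actually runs.

	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-839656" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.136,DNS:localhost,DNS:minikube,DNS:old-k8s-version-839656') \
	  -out server.pem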
	I0826 11:59:39.629359  148739 buildroot.go:189] setting minikube options for container-runtime
	I0826 11:59:39.629546  148739 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 11:59:39.629623  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.632566  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.632955  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.632993  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.633206  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.633405  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.633555  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.633671  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.633849  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:39.634062  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:39.634085  148739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 11:59:39.907100  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 11:59:39.907134  148739 main.go:141] libmachine: Checking connection to Docker...
	I0826 11:59:39.907152  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetURL
	I0826 11:59:39.908785  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using libvirt version 6000000
	I0826 11:59:39.911374  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.912038  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.912076  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.912267  148739 main.go:141] libmachine: Docker is up and running!
	I0826 11:59:39.912284  148739 main.go:141] libmachine: Reticulating splines...
	I0826 11:59:39.912291  148739 client.go:171] duration metric: took 26.005831234s to LocalClient.Create
	I0826 11:59:39.912317  148739 start.go:167] duration metric: took 26.005894368s to libmachine.API.Create "old-k8s-version-839656"
	I0826 11:59:39.912331  148739 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 11:59:39.912343  148739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 11:59:39.912366  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:39.912638  148739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 11:59:39.912664  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:39.915869  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.916707  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:39.916738  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:39.916963  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:39.917179  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:39.917402  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:39.917572  148739 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 11:59:40.001394  148739 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 11:59:40.006332  148739 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 11:59:40.006361  148739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 11:59:40.006437  148739 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 11:59:40.006544  148739 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 11:59:40.006709  148739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 11:59:40.016594  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:59:40.044164  148739 start.go:296] duration metric: took 131.815832ms for postStartSetup
	I0826 11:59:40.044228  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 11:59:40.044908  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 11:59:40.047911  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.048309  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:40.048357  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.048558  148739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 11:59:40.048831  148739 start.go:128] duration metric: took 26.164527769s to createHost
	I0826 11:59:40.048857  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:40.051254  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.051630  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:40.051661  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.051850  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:40.052203  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:40.052396  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:40.052620  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:40.052836  148739 main.go:141] libmachine: Using SSH client type: native
	I0826 11:59:40.053052  148739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 11:59:40.053067  148739 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 11:59:40.163377  148739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724673580.140884793
	
	I0826 11:59:40.163407  148739 fix.go:216] guest clock: 1724673580.140884793
	I0826 11:59:40.163416  148739 fix.go:229] Guest: 2024-08-26 11:59:40.140884793 +0000 UTC Remote: 2024-08-26 11:59:40.048846335 +0000 UTC m=+45.899387764 (delta=92.038458ms)
	I0826 11:59:40.163441  148739 fix.go:200] guest clock delta is within tolerance: 92.038458ms
	I0826 11:59:40.163448  148739 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 26.279347318s
	I0826 11:59:40.163480  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:40.163818  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 11:59:40.167044  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.167479  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:40.167509  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.167707  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:40.168275  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:40.168498  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 11:59:40.168600  148739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 11:59:40.168645  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:40.168751  148739 ssh_runner.go:195] Run: cat /version.json
	I0826 11:59:40.168778  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 11:59:40.171912  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.172201  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.172272  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:40.172300  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.172510  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:40.172628  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:40.172659  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:40.172812  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:40.173012  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 11:59:40.173019  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:40.173205  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 11:59:40.173224  148739 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 11:59:40.173328  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 11:59:40.173509  148739 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 11:59:40.290699  148739 ssh_runner.go:195] Run: systemctl --version
	I0826 11:59:40.297097  148739 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 11:59:40.457052  148739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 11:59:40.463234  148739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 11:59:40.463323  148739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 11:59:40.481131  148739 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 11:59:40.481167  148739 start.go:495] detecting cgroup driver to use...
	I0826 11:59:40.481256  148739 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 11:59:40.498748  148739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 11:59:40.513606  148739 docker.go:217] disabling cri-docker service (if available) ...
	I0826 11:59:40.513673  148739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 11:59:40.528381  148739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 11:59:40.544125  148739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 11:59:40.671181  148739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 11:59:40.840254  148739 docker.go:233] disabling docker service ...
	I0826 11:59:40.840317  148739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 11:59:40.865936  148739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 11:59:40.881060  148739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 11:59:41.008850  148739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 11:59:41.152185  148739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 11:59:41.166739  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 11:59:41.187201  148739 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 11:59:41.187266  148739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:59:41.198118  148739 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 11:59:41.198194  148739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:59:41.210447  148739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 11:59:41.221641  148739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
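A quick way to spot-check that the three sed edits above landed in the CRI-O drop-in (purely illustrative; the test does not run this):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the commands above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"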
	I0826 11:59:41.234066  148739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 11:59:41.245872  148739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 11:59:41.256790  148739 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 11:59:41.256863  148739 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 11:59:41.271457  148739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
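The br_netfilter and ip_forward settings above are applied in place for the current boot only; a minimal sketch of making the same kernel settings persistent follows (the sysctl.d file name is illustrative, not something minikube writes):

	sudo modprobe br_netfilter
	sudo tee /etc/sysctl.d/99-kubernetes-cri.conf <<'EOF'
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system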
	I0826 11:59:41.281531  148739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:59:41.401548  148739 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 11:59:41.564874  148739 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 11:59:41.564956  148739 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 11:59:41.570718  148739 start.go:563] Will wait 60s for crictl version
	I0826 11:59:41.570783  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:41.574562  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 11:59:41.619139  148739 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 11:59:41.619236  148739 ssh_runner.go:195] Run: crio --version
	I0826 11:59:41.648043  148739 ssh_runner.go:195] Run: crio --version
	I0826 11:59:41.680140  148739 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 11:59:41.681396  148739 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 11:59:41.684640  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:41.685094  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 12:59:28 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 11:59:41.685132  148739 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 11:59:41.685368  148739 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 11:59:41.690014  148739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:59:41.702597  148739 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 11:59:41.702738  148739 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 11:59:41.702793  148739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:59:41.737906  148739 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 11:59:41.737989  148739 ssh_runner.go:195] Run: which lz4
	I0826 11:59:41.742277  148739 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 11:59:41.746921  148739 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 11:59:41.746963  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 11:59:43.312881  148739 crio.go:462] duration metric: took 1.570634094s to copy over tarball
	I0826 11:59:43.312972  148739 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 11:59:46.057377  148739 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.744367161s)
	I0826 11:59:46.057418  148739 crio.go:469] duration metric: took 2.744503795s to extract the tarball
	I0826 11:59:46.057430  148739 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 11:59:46.102755  148739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 11:59:46.150349  148739 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 11:59:46.150380  148739 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 11:59:46.150454  148739 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:59:46.150480  148739 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 11:59:46.150491  148739 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.150506  148739 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.150546  148739 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.150462  148739 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.150460  148739 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.150493  148739 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.152293  148739 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.152368  148739 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.152381  148739 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.152385  148739 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:59:46.152368  148739 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.152385  148739 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.152419  148739 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 11:59:46.152494  148739 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.390674  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 11:59:46.420950  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.426440  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.444528  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.445048  148739 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 11:59:46.445095  148739 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 11:59:46.445141  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.445971  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.450076  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.464842  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.534572  148739 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 11:59:46.534622  148739 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.534675  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.578199  148739 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 11:59:46.578238  148739 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.578266  148739 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 11:59:46.578279  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.578287  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:59:46.578300  148739 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.578338  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.578352  148739 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 11:59:46.578407  148739 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.578443  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.602673  148739 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 11:59:46.602729  148739 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.602785  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.610509  148739 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 11:59:46.610624  148739 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.610675  148739 ssh_runner.go:195] Run: which crictl
	I0826 11:59:46.610561  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.610579  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.636960  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.637010  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:59:46.637035  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.637094  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.676755  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.676941  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.764320  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.801521  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.801638  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 11:59:46.801672  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.801743  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.840448  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.840490  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 11:59:46.863294  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 11:59:46.923498  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 11:59:46.972917  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 11:59:46.972960  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 11:59:46.976730  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 11:59:46.992274  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 11:59:46.992285  148739 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 11:59:46.992525  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 11:59:47.017759  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 11:59:47.020023  148739 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 11:59:47.072024  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 11:59:47.077653  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 11:59:47.077740  148739 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 11:59:47.198291  148739 cache_images.go:92] duration metric: took 1.047892119s to LoadCachedImages
	W0826 11:59:47.198416  148739 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0826 11:59:47.198437  148739 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 11:59:47.198564  148739 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 11:59:47.198652  148739 ssh_runner.go:195] Run: crio config
	I0826 11:59:47.247941  148739 cni.go:84] Creating CNI manager for ""
	I0826 11:59:47.247976  148739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:59:47.247990  148739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 11:59:47.248018  148739 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 11:59:47.248215  148739 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 11:59:47.248299  148739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 11:59:47.259147  148739 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 11:59:47.259235  148739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 11:59:47.269855  148739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 11:59:47.288362  148739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 11:59:47.305772  148739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
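For reference, the multi-document config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new and is the kind of file kubeadm consumes directly; a hand-run sketch is shown below (illustrative only, not the exact invocation minikube issues later in this log):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new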
	I0826 11:59:47.323125  148739 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 11:59:47.328102  148739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 11:59:47.342455  148739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 11:59:47.466757  148739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 11:59:47.488351  148739 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 11:59:47.488378  148739 certs.go:194] generating shared ca certs ...
	I0826 11:59:47.488395  148739 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:47.488635  148739 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 11:59:47.488693  148739 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 11:59:47.488707  148739 certs.go:256] generating profile certs ...
	I0826 11:59:47.488777  148739 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 11:59:47.488802  148739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt with IP's: []
	I0826 11:59:47.660478  148739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt ...
	I0826 11:59:47.660531  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: {Name:mk7ff28cb8c7a6546a6e017efa1bb3a4718828f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:47.660814  148739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key ...
	I0826 11:59:47.660851  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key: {Name:mk9ceaebd860cec1526e9c7d651b3e1adc448995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:47.661017  148739 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 11:59:47.661053  148739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt.bc731261 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.136]
	I0826 11:59:47.848545  148739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt.bc731261 ...
	I0826 11:59:47.848578  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt.bc731261: {Name:mk5f04bd9d3bedb3c82bd327b6b49abdcc7f5ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:47.848768  148739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261 ...
	I0826 11:59:47.848791  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261: {Name:mk9a55f240988d7fda7d4d5e49d80b2d0b859090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:47.848891  148739 certs.go:381] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt.bc731261 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt
	I0826 11:59:47.848986  148739 certs.go:385] copying /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261 -> /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key
	I0826 11:59:47.849064  148739 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 11:59:47.849088  148739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt with IP's: []
	I0826 11:59:48.093649  148739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt ...
	I0826 11:59:48.093687  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt: {Name:mk8da095321a9982321c7a69478548e840e755c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:48.093864  148739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key ...
	I0826 11:59:48.093879  148739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key: {Name:mkc12b144a4f7e7dfc8d0b0f44b2cd1c2fa09037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 11:59:48.094048  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 11:59:48.094085  148739 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 11:59:48.094095  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 11:59:48.094119  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 11:59:48.094141  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 11:59:48.094162  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 11:59:48.094196  148739 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 11:59:48.094750  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 11:59:48.125499  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 11:59:48.149681  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 11:59:48.176653  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 11:59:48.200520  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 11:59:48.229899  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 11:59:48.334596  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 11:59:48.388045  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 11:59:48.417849  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 11:59:48.447950  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 11:59:48.487800  148739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 11:59:48.511262  148739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 11:59:48.529409  148739 ssh_runner.go:195] Run: openssl version
	I0826 11:59:48.535333  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 11:59:48.546712  148739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:59:48.551802  148739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:59:48.551891  148739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 11:59:48.557886  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 11:59:48.570389  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 11:59:48.583081  148739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 11:59:48.587687  148739 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 11:59:48.587765  148739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 11:59:48.593528  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 11:59:48.605104  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 11:59:48.617813  148739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 11:59:48.623076  148739 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 11:59:48.623179  148739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 11:59:48.629542  148739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 11:59:48.641174  148739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 11:59:48.646423  148739 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0826 11:59:48.646498  148739 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
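	[editor note] The StartCluster configuration above corresponds roughly to a start of the form below; this is a reconstruction from the config dump for readability, not the exact command the test harness issued, with flag values taken from the Name, Driver, ContainerRuntime, KubernetesVersion, Memory and CPUs fields:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-839656 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --memory=2200 --cpus=2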
	I0826 11:59:48.646701  148739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 11:59:48.646895  148739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 11:59:48.684854  148739 cri.go:89] found id: ""
	I0826 11:59:48.684947  148739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 11:59:48.698234  148739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 11:59:48.711179  148739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 11:59:48.721740  148739 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 11:59:48.721770  148739 kubeadm.go:157] found existing configuration files:
	
	I0826 11:59:48.721835  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 11:59:48.731230  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 11:59:48.731309  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 11:59:48.742234  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 11:59:48.752879  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 11:59:48.752978  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 11:59:48.769061  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 11:59:48.783537  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 11:59:48.783622  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 11:59:48.794121  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 11:59:48.804552  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 11:59:48.804622  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 11:59:48.815335  148739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 11:59:48.946183  148739 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 11:59:48.946255  148739 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 11:59:49.109630  148739 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 11:59:49.109781  148739 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 11:59:49.109899  148739 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 11:59:49.360129  148739 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 11:59:49.490720  148739 out.go:235]   - Generating certificates and keys ...
	I0826 11:59:49.490896  148739 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 11:59:49.490997  148739 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 11:59:49.750724  148739 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 11:59:49.845753  148739 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 11:59:50.151010  148739 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 11:59:50.226214  148739 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 11:59:50.401053  148739 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 11:59:50.401300  148739 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	I0826 11:59:50.490589  148739 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 11:59:50.491434  148739 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	I0826 11:59:50.626212  148739 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 11:59:50.856439  148739 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 11:59:51.017747  148739 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 11:59:51.018141  148739 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 11:59:51.232229  148739 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 11:59:51.467917  148739 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 11:59:51.566840  148739 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 11:59:52.091137  148739 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 11:59:52.106973  148739 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 11:59:52.108686  148739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 11:59:52.108777  148739 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 11:59:52.243008  148739 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 11:59:52.245150  148739 out.go:235]   - Booting up control plane ...
	I0826 11:59:52.245301  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 11:59:52.249827  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 11:59:52.252045  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 11:59:52.252875  148739 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 11:59:52.257653  148739 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:00:32.252079  148739 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:00:32.252758  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:00:32.253052  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:00:37.253183  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:00:37.253478  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:00:47.252801  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:00:47.253068  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:01:07.252575  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:01:07.252805  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:01:47.254244  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:01:47.254518  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:01:47.254543  148739 kubeadm.go:310] 
	I0826 12:01:47.254604  148739 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:01:47.254651  148739 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:01:47.254657  148739 kubeadm.go:310] 
	I0826 12:01:47.254719  148739 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:01:47.254780  148739 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:01:47.254919  148739 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:01:47.254935  148739 kubeadm.go:310] 
	I0826 12:01:47.255078  148739 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:01:47.255121  148739 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:01:47.255171  148739 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:01:47.255179  148739 kubeadm.go:310] 
	I0826 12:01:47.255340  148739 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:01:47.255462  148739 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:01:47.255473  148739 kubeadm.go:310] 
	I0826 12:01:47.255624  148739 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:01:47.255763  148739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:01:47.255888  148739 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:01:47.255990  148739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:01:47.256000  148739 kubeadm.go:310] 
	I0826 12:01:47.256622  148739 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:01:47.256735  148739 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:01:47.256814  148739 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:01:47.256953  148739 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-839656] and IPs [192.168.72.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
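	[editor note] The first kubeadm init attempt above has timed out waiting for the kubelet; the entries that follow show minikube resetting the node and retrying. As a sketch only, the kubelet-side checks suggested in the kubeadm error could be run from the host against this profile while it is still up (profile name taken from this log):
	
	  out/minikube-linux-amd64 -p old-k8s-version-839656 ssh "sudo systemctl status kubelet"
	  out/minikube-linux-amd64 -p old-k8s-version-839656 ssh "sudo journalctl -u kubelet --no-pager -n 100"
	  out/minikube-linux-amd64 -p old-k8s-version-839656 ssh "sudo crictl ps -a"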
	
	I0826 12:01:47.257004  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:01:48.646875  148739 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389816899s)
	I0826 12:01:48.646996  148739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:01:48.663221  148739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:01:48.674702  148739 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:01:48.674732  148739 kubeadm.go:157] found existing configuration files:
	
	I0826 12:01:48.674806  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:01:48.684733  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:01:48.684813  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:01:48.695133  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:01:48.705191  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:01:48.705273  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:01:48.718880  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:01:48.728647  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:01:48.728730  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:01:48.739491  148739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:01:48.750200  148739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:01:48.750281  148739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:01:48.761003  148739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:01:48.994818  148739 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:03:45.173175  148739 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:03:45.173299  148739 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:03:45.175262  148739 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:03:45.175346  148739 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:03:45.175432  148739 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:03:45.175524  148739 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:03:45.175636  148739 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:03:45.175832  148739 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:03:45.178232  148739 out.go:235]   - Generating certificates and keys ...
	I0826 12:03:45.178335  148739 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:03:45.178418  148739 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:03:45.178531  148739 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:03:45.178629  148739 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:03:45.178725  148739 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:03:45.178812  148739 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:03:45.178912  148739 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:03:45.179001  148739 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:03:45.179108  148739 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:03:45.179212  148739 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:03:45.179283  148739 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:03:45.179381  148739 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:03:45.179449  148739 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:03:45.179496  148739 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:03:45.179581  148739 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:03:45.179660  148739 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:03:45.179837  148739 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:03:45.179943  148739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:03:45.179994  148739 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:03:45.180110  148739 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:03:45.181693  148739 out.go:235]   - Booting up control plane ...
	I0826 12:03:45.181786  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:03:45.181852  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:03:45.181908  148739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:03:45.181993  148739 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:03:45.182198  148739 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:03:45.182278  148739 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:03:45.182372  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:03:45.182612  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:03:45.182730  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:03:45.182978  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:03:45.183096  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:03:45.183349  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:03:45.183465  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:03:45.183707  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:03:45.183794  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:03:45.184040  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:03:45.184056  148739 kubeadm.go:310] 
	I0826 12:03:45.184104  148739 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:03:45.184155  148739 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:03:45.184164  148739 kubeadm.go:310] 
	I0826 12:03:45.184210  148739 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:03:45.184253  148739 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:03:45.184376  148739 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:03:45.184392  148739 kubeadm.go:310] 
	I0826 12:03:45.184512  148739 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:03:45.184558  148739 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:03:45.184601  148739 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:03:45.184612  148739 kubeadm.go:310] 
	I0826 12:03:45.184738  148739 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:03:45.184846  148739 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:03:45.184858  148739 kubeadm.go:310] 
	I0826 12:03:45.184984  148739 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:03:45.185090  148739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:03:45.185185  148739 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:03:45.185274  148739 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:03:45.185363  148739 kubeadm.go:394] duration metric: took 3m56.538870728s to StartCluster
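	[editor note] After roughly four minutes StartCluster gives up, and the entries below show minikube collecting diagnostics (crictl listings, dmesg, describe nodes, CRI-O and kubelet journals) before surfacing the error. A comparable bundle can be pulled from the host with the logs subcommand (hypothetical invocation, assuming this profile name):
	
	  out/minikube-linux-amd64 -p old-k8s-version-839656 logs --file=old-k8s-version-839656.log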
	I0826 12:03:45.185427  148739 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:03:45.185502  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:03:45.185585  148739 kubeadm.go:310] 
	I0826 12:03:45.240851  148739 cri.go:89] found id: ""
	I0826 12:03:45.240896  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.240913  148739 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:03:45.240921  148739 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:03:45.240993  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:03:45.283139  148739 cri.go:89] found id: ""
	I0826 12:03:45.283172  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.283182  148739 logs.go:278] No container was found matching "etcd"
	I0826 12:03:45.283189  148739 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:03:45.283245  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:03:45.326215  148739 cri.go:89] found id: ""
	I0826 12:03:45.326246  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.326258  148739 logs.go:278] No container was found matching "coredns"
	I0826 12:03:45.326265  148739 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:03:45.326331  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:03:45.368830  148739 cri.go:89] found id: ""
	I0826 12:03:45.368860  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.368870  148739 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:03:45.368885  148739 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:03:45.368956  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:03:45.418534  148739 cri.go:89] found id: ""
	I0826 12:03:45.418611  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.418636  148739 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:03:45.418645  148739 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:03:45.418712  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:03:45.460331  148739 cri.go:89] found id: ""
	I0826 12:03:45.460363  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.460375  148739 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:03:45.460384  148739 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:03:45.460451  148739 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:03:45.519295  148739 cri.go:89] found id: ""
	I0826 12:03:45.519332  148739 logs.go:276] 0 containers: []
	W0826 12:03:45.519344  148739 logs.go:278] No container was found matching "kindnet"
	I0826 12:03:45.519358  148739 logs.go:123] Gathering logs for dmesg ...
	I0826 12:03:45.519375  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:03:45.546613  148739 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:03:45.546663  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:03:45.729116  148739 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:03:45.729148  148739 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:03:45.729165  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:03:45.848703  148739 logs.go:123] Gathering logs for container status ...
	I0826 12:03:45.848757  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:03:45.897108  148739 logs.go:123] Gathering logs for kubelet ...
	I0826 12:03:45.897148  148739 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0826 12:03:45.956262  148739 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:03:45.956383  148739 out.go:270] * 
	* 
	W0826 12:03:45.956446  148739 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:03:45.956461  148739 out.go:270] * 
	* 
	W0826 12:03:45.957246  148739 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:03:45.960244  148739 out.go:201] 
	W0826 12:03:45.961542  148739 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:03:45.961581  148739 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:03:45.961602  148739 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:03:45.963274  148739 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 6 (245.266688ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:03:46.262294  151878 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-839656" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (292.14s)
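Note: the "Suggestion" line in the log above points at the kubelet cgroup driver. An illustrative retry (an abbreviated form of the test invocation above with only the suggested --extra-config flag added; not part of the recorded run) would look like:

	out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

Related issue referenced in the log: https://github.com/kubernetes/minikube/issues/4172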

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (57.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-585941 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-585941 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.115862485s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-585941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-585941" primary control-plane node in "pause-585941" cluster
	* Updating the running kvm2 "pause-585941" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-585941" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
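For context, pause_test.go:100 fails because the second-start output above never contains the string "The running cluster does not require reconfiguration". An equivalent manual check (illustrative only, reusing the exact start command recorded at the top of this test) would be:

	out/minikube-linux-amd64 start -p pause-585941 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio 2>&1 | grep "The running cluster does not require reconfiguration"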
** stderr ** 
	I0826 11:59:43.857955  149261 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:59:43.858237  149261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:59:43.858247  149261 out.go:358] Setting ErrFile to fd 2...
	I0826 11:59:43.858251  149261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:59:43.858427  149261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:59:43.859120  149261 out.go:352] Setting JSON to false
	I0826 11:59:43.860154  149261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6125,"bootTime":1724667459,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:59:43.860245  149261 start.go:139] virtualization: kvm guest
	I0826 11:59:43.862536  149261 out.go:177] * [pause-585941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:59:43.864186  149261 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:59:43.864238  149261 notify.go:220] Checking for updates...
	I0826 11:59:43.867276  149261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:59:43.869000  149261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:59:43.870444  149261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:59:43.871866  149261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:59:43.873548  149261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:59:43.875511  149261 config.go:182] Loaded profile config "pause-585941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:59:43.875932  149261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:59:43.875990  149261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:59:43.896665  149261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0826 11:59:43.897230  149261 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:59:43.898002  149261 main.go:141] libmachine: Using API Version  1
	I0826 11:59:43.898030  149261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:59:43.898391  149261 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:59:43.898595  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 11:59:43.898982  149261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:59:43.899435  149261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:59:43.899483  149261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:59:43.918268  149261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I0826 11:59:43.919059  149261 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:59:43.919833  149261 main.go:141] libmachine: Using API Version  1
	I0826 11:59:43.919863  149261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:59:43.920314  149261 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:59:43.920573  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 11:59:43.961021  149261 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:59:43.962556  149261 start.go:297] selected driver: kvm2
	I0826 11:59:43.962647  149261 start.go:901] validating driver "kvm2" against &{Name:pause-585941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-585941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:59:43.962829  149261 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:59:43.963296  149261 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:59:43.963406  149261 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 11:59:43.983510  149261 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 11:59:43.984439  149261 cni.go:84] Creating CNI manager for ""
	I0826 11:59:43.984456  149261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 11:59:43.984536  149261 start.go:340] cluster config:
	{Name:pause-585941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-585941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:59:43.984791  149261 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 11:59:43.987732  149261 out.go:177] * Starting "pause-585941" primary control-plane node in "pause-585941" cluster
	I0826 11:59:43.990200  149261 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 11:59:43.990266  149261 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 11:59:43.990277  149261 cache.go:56] Caching tarball of preloaded images
	I0826 11:59:43.990383  149261 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 11:59:43.990395  149261 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 11:59:43.990556  149261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/config.json ...
	I0826 11:59:43.990876  149261 start.go:360] acquireMachinesLock for pause-585941: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:00:00.920230  149261 start.go:364] duration metric: took 16.929296756s to acquireMachinesLock for "pause-585941"
	I0826 12:00:00.920311  149261 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:00:00.920326  149261 fix.go:54] fixHost starting: 
	I0826 12:00:00.920803  149261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:00:00.920869  149261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:00:00.939190  149261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0826 12:00:00.939686  149261 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:00:00.940317  149261 main.go:141] libmachine: Using API Version  1
	I0826 12:00:00.940341  149261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:00:00.940716  149261 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:00:00.940900  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:00.941075  149261 main.go:141] libmachine: (pause-585941) Calling .GetState
	I0826 12:00:00.942730  149261 fix.go:112] recreateIfNeeded on pause-585941: state=Running err=<nil>
	W0826 12:00:00.942756  149261 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:00:00.944954  149261 out.go:177] * Updating the running kvm2 "pause-585941" VM ...
	I0826 12:00:00.946464  149261 machine.go:93] provisionDockerMachine start ...
	I0826 12:00:00.946489  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:00.946783  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:00.949968  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:00.950437  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:00.950466  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:00.950652  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:00.950869  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:00.951043  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:00.951156  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:00.951312  149261 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:00.951527  149261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0826 12:00:00.951539  149261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:00:01.067337  149261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-585941
	
	I0826 12:00:01.067370  149261 main.go:141] libmachine: (pause-585941) Calling .GetMachineName
	I0826 12:00:01.067654  149261 buildroot.go:166] provisioning hostname "pause-585941"
	I0826 12:00:01.067704  149261 main.go:141] libmachine: (pause-585941) Calling .GetMachineName
	I0826 12:00:01.067976  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:01.071065  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.071436  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.071469  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.071752  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:01.071967  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.072173  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.072335  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:01.072501  149261 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:01.072719  149261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0826 12:00:01.072736  149261 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-585941 && echo "pause-585941" | sudo tee /etc/hostname
	I0826 12:00:01.200881  149261 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-585941
	
	I0826 12:00:01.200910  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:01.203817  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.204229  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.204263  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.204419  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:01.204625  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.204829  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.205002  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:01.205200  149261 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:01.205470  149261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0826 12:00:01.205499  149261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-585941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-585941/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-585941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:00:01.324535  149261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:00:01.324578  149261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:00:01.324646  149261 buildroot.go:174] setting up certificates
	I0826 12:00:01.324660  149261 provision.go:84] configureAuth start
	I0826 12:00:01.324676  149261 main.go:141] libmachine: (pause-585941) Calling .GetMachineName
	I0826 12:00:01.324999  149261 main.go:141] libmachine: (pause-585941) Calling .GetIP
	I0826 12:00:01.328034  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.328423  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.328453  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.328697  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:01.331636  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.332032  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.332063  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.332264  149261 provision.go:143] copyHostCerts
	I0826 12:00:01.332335  149261 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:00:01.332359  149261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:00:01.332439  149261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:00:01.332560  149261 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:00:01.332574  149261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:00:01.332608  149261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:00:01.332689  149261 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:00:01.332704  149261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:00:01.332732  149261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:00:01.332803  149261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.pause-585941 san=[127.0.0.1 192.168.39.13 localhost minikube pause-585941]
	I0826 12:00:01.480213  149261 provision.go:177] copyRemoteCerts
	I0826 12:00:01.480300  149261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:00:01.480326  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:01.483741  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.484255  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.484295  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.484500  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:01.484766  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.484973  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:01.485169  149261 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/pause-585941/id_rsa Username:docker}
	I0826 12:00:01.579626  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:00:01.608169  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:00:01.636395  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0826 12:00:01.665855  149261 provision.go:87] duration metric: took 341.175819ms to configureAuth
	I0826 12:00:01.665911  149261 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:00:01.666153  149261 config.go:182] Loaded profile config "pause-585941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:01.666246  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:01.669302  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.669634  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:01.669667  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:01.669924  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:01.670138  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.670333  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:01.670492  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:01.670699  149261 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:01.670912  149261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0826 12:00:01.670930  149261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:00:08.142352  149261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:00:08.142383  149261 machine.go:96] duration metric: took 7.195901469s to provisionDockerMachine
	I0826 12:00:08.142398  149261 start.go:293] postStartSetup for "pause-585941" (driver="kvm2")
	I0826 12:00:08.142410  149261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:00:08.142433  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:08.142861  149261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:00:08.142895  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:08.146134  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.146607  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:08.146639  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.146899  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:08.147098  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:08.147319  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:08.147510  149261 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/pause-585941/id_rsa Username:docker}
	I0826 12:00:08.240952  149261 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:00:08.245315  149261 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:00:08.245345  149261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:00:08.245434  149261 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:00:08.245536  149261 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:00:08.245657  149261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:00:08.255532  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:00:08.280033  149261 start.go:296] duration metric: took 137.620218ms for postStartSetup
	I0826 12:00:08.280082  149261 fix.go:56] duration metric: took 7.359754464s for fixHost
	I0826 12:00:08.280105  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:08.283151  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.283584  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:08.283619  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.283835  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:08.284055  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:08.284210  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:08.284362  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:08.284563  149261 main.go:141] libmachine: Using SSH client type: native
	I0826 12:00:08.284782  149261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0826 12:00:08.284798  149261 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:00:08.404137  149261 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724673608.396786785
	
	I0826 12:00:08.404165  149261 fix.go:216] guest clock: 1724673608.396786785
	I0826 12:00:08.404178  149261 fix.go:229] Guest: 2024-08-26 12:00:08.396786785 +0000 UTC Remote: 2024-08-26 12:00:08.280087148 +0000 UTC m=+24.468465736 (delta=116.699637ms)
	I0826 12:00:08.404235  149261 fix.go:200] guest clock delta is within tolerance: 116.699637ms
	I0826 12:00:08.404245  149261 start.go:83] releasing machines lock for "pause-585941", held for 7.483960656s
	I0826 12:00:08.404276  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:08.404583  149261 main.go:141] libmachine: (pause-585941) Calling .GetIP
	I0826 12:00:08.407658  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.408029  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:08.408062  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.408379  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:08.409082  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:08.409317  149261 main.go:141] libmachine: (pause-585941) Calling .DriverName
	I0826 12:00:08.409429  149261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:00:08.409481  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:08.409628  149261 ssh_runner.go:195] Run: cat /version.json
	I0826 12:00:08.409658  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHHostname
	I0826 12:00:08.412626  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.413078  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:08.413107  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.413128  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.413343  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:08.413588  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:08.413707  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:08.413723  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:08.413734  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:08.413927  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHPort
	I0826 12:00:08.413917  149261 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/pause-585941/id_rsa Username:docker}
	I0826 12:00:08.414095  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHKeyPath
	I0826 12:00:08.414235  149261 main.go:141] libmachine: (pause-585941) Calling .GetSSHUsername
	I0826 12:00:08.414389  149261 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/pause-585941/id_rsa Username:docker}
	I0826 12:00:08.501466  149261 ssh_runner.go:195] Run: systemctl --version
	I0826 12:00:08.532236  149261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:00:08.686131  149261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:00:08.693927  149261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:00:08.694015  149261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:00:08.704390  149261 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0826 12:00:08.704425  149261 start.go:495] detecting cgroup driver to use...
	I0826 12:00:08.704528  149261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:00:08.721920  149261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:00:08.738235  149261 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:00:08.738315  149261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:00:08.754928  149261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:00:08.770768  149261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:00:08.914079  149261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:00:09.040269  149261 docker.go:233] disabling docker service ...
	I0826 12:00:09.040342  149261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:00:09.056964  149261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:00:09.072958  149261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:00:09.228454  149261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:00:09.375318  149261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:00:09.391219  149261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:00:09.415735  149261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:00:09.415796  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.431107  149261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:00:09.431178  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.446792  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.519191  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.558851  149261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:00:09.595764  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.647946  149261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.755317  149261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:00:09.873278  149261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:00:09.953994  149261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:00:09.997483  149261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:00:10.323520  149261 ssh_runner.go:195] Run: sudo systemctl restart crio
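The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts CRI-O. A rough local equivalent of those sed edits, sketched with os/exec using the same substitutions shown in the log (requires root; this is an illustration, not minikube's implementation):

package main

import (
	"log"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
	}
	for _, e := range edits {
		if out, err := exec.Command(e[0], e[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", e, err, out)
		}
	}
	// Reload systemd units and restart the runtime so the new config takes effect.
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		log.Fatalf("daemon-reload: %v\n%s", err, out)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
}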
	I0826 12:00:10.859346  149261 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:00:10.859495  149261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:00:10.865723  149261 start.go:563] Will wait 60s for crictl version
	I0826 12:00:10.865808  149261 ssh_runner.go:195] Run: which crictl
	I0826 12:00:10.870906  149261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:00:10.915245  149261 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:00:10.915388  149261 ssh_runner.go:195] Run: crio --version
	I0826 12:00:10.948431  149261 ssh_runner.go:195] Run: crio --version
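After the restart, the tool waits up to 60s for the CRI socket to reappear and then queries the runtime version. A minimal sketch of that wait (socket path and crictl invocation taken from the log; the 500ms polling interval is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
}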
	I0826 12:00:10.989062  149261 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:00:10.990440  149261 main.go:141] libmachine: (pause-585941) Calling .GetIP
	I0826 12:00:10.993797  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:10.994229  149261 main.go:141] libmachine: (pause-585941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:a8:96", ip: ""} in network mk-pause-585941: {Iface:virbr1 ExpiryTime:2024-08-26 12:59:01 +0000 UTC Type:0 Mac:52:54:00:8b:a8:96 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:pause-585941 Clientid:01:52:54:00:8b:a8:96}
	I0826 12:00:10.994260  149261 main.go:141] libmachine: (pause-585941) DBG | domain pause-585941 has defined IP address 192.168.39.13 and MAC address 52:54:00:8b:a8:96 in network mk-pause-585941
	I0826 12:00:10.994526  149261 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:00:11.000030  149261 kubeadm.go:883] updating cluster {Name:pause-585941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-585941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:00:11.000215  149261 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:00:11.000282  149261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:00:11.044621  149261 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:00:11.044652  149261 crio.go:433] Images already preloaded, skipping extraction
	I0826 12:00:11.044712  149261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:00:11.081166  149261 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:00:11.081191  149261 cache_images.go:84] Images are preloaded, skipping loading
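The "all images are preloaded" conclusion above comes from listing the runtime's images and confirming the required tags are already present. A small sketch of that check against `crictl images --output json` (the list of required images below is a hand-picked example, not the exact set minikube verifies):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	present := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	// Example subset of images a v1.31.0 / CRI-O cluster needs.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	} {
		fmt.Printf("%-45s preloaded=%v\n", want, present[want])
	}
}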
	I0826 12:00:11.081201  149261 kubeadm.go:934] updating node { 192.168.39.13 8443 v1.31.0 crio true true} ...
	I0826 12:00:11.081301  149261 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-585941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-585941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:00:11.081369  149261 ssh_runner.go:195] Run: crio config
	I0826 12:00:11.129324  149261 cni.go:84] Creating CNI manager for ""
	I0826 12:00:11.129353  149261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:11.129365  149261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:00:11.129395  149261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-585941 NodeName:pause-585941 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:00:11.129622  149261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-585941"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:00:11.129705  149261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:00:11.140357  149261 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:00:11.140453  149261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:00:11.152370  149261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0826 12:00:11.171521  149261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:00:11.191047  149261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0826 12:00:11.210786  149261 ssh_runner.go:195] Run: grep 192.168.39.13	control-plane.minikube.internal$ /etc/hosts
	I0826 12:00:11.215262  149261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:00:11.377431  149261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:00:11.393764  149261 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941 for IP: 192.168.39.13
	I0826 12:00:11.393793  149261 certs.go:194] generating shared ca certs ...
	I0826 12:00:11.393808  149261 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:00:11.393981  149261 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:00:11.394019  149261 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:00:11.394028  149261 certs.go:256] generating profile certs ...
	I0826 12:00:11.394101  149261 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/client.key
	I0826 12:00:11.394162  149261 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/apiserver.key.f945d550
	I0826 12:00:11.394195  149261 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/proxy-client.key
	I0826 12:00:11.394305  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:00:11.394333  149261 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:00:11.394343  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:00:11.394362  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:00:11.394385  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:00:11.394407  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:00:11.394442  149261 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:00:11.395191  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:00:11.421538  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:00:11.446809  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:00:11.473997  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:00:11.501168  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0826 12:00:11.582965  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:00:11.737846  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:00:11.837928  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/pause-585941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:00:11.922672  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:00:11.972925  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:00:12.021533  149261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:00:12.059773  149261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:00:12.083741  149261 ssh_runner.go:195] Run: openssl version
	I0826 12:00:12.091720  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:00:12.106800  149261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:00:12.111701  149261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:00:12.111782  149261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:00:12.117622  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:00:12.128012  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:00:12.139797  149261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:12.144828  149261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:12.144900  149261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:00:12.151114  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:00:12.161774  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:00:12.173791  149261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:00:12.178567  149261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:00:12.178655  149261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:00:12.184713  149261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:00:12.195293  149261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:00:12.200158  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:00:12.208412  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:00:12.216647  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:00:12.223119  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:00:12.229637  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:00:12.236040  149261 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
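The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate stays valid for at least another 24 hours. The same check can be expressed in pure Go; a sketch using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within the given window (86400s = 24h in the log).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}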
	I0826 12:00:12.243179  149261 kubeadm.go:392] StartCluster: {Name:pause-585941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-585941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:12.243342  149261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:00:12.243443  149261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:00:12.291858  149261 cri.go:89] found id: "0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447"
	I0826 12:00:12.291882  149261 cri.go:89] found id: "0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849"
	I0826 12:00:12.291887  149261 cri.go:89] found id: "35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308"
	I0826 12:00:12.291890  149261 cri.go:89] found id: "385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31"
	I0826 12:00:12.291892  149261 cri.go:89] found id: "e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7"
	I0826 12:00:12.291895  149261 cri.go:89] found id: "cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310"
	I0826 12:00:12.291897  149261 cri.go:89] found id: ""
	I0826 12:00:12.291944  149261 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
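The last step captured above enumerates kube-system container IDs by label before operating on them. A sketch of that listing, using the same crictl invocation shown in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same query the log runs: every container (any state) whose pod lives
	// in the kube-system namespace, printed as bare IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}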
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-585941 -n pause-585941
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-585941 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-585941 logs -n 25: (1.527101981s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo find                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo crio                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-814705                                     | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p pause-585941 --memory=2048                        | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:59 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-373568 ssh                              | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-373568 -- sudo                       | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-373568                               | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p old-k8s-version-839656                            | old-k8s-version-839656    | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 11:59 UTC |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-585941                                      | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 12:00 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                 | no-preload-956479         | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:00:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:00:34.917916  149888 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:00:34.918057  149888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:34.918067  149888 out.go:358] Setting ErrFile to fd 2...
	I0826 12:00:34.918072  149888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:34.918265  149888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:00:34.918890  149888 out.go:352] Setting JSON to false
	I0826 12:00:34.919938  149888 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6176,"bootTime":1724667459,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:00:34.920007  149888 start.go:139] virtualization: kvm guest
	I0826 12:00:34.922396  149888 out.go:177] * [no-preload-956479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:00:34.923742  149888 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:00:34.923784  149888 notify.go:220] Checking for updates...
	I0826 12:00:34.926484  149888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:00:34.928394  149888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:00:34.930141  149888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:00:34.931697  149888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:00:34.932954  149888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:00:34.934571  149888 config.go:182] Loaded profile config "cert-expiration-156240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:34.934723  149888 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:00:34.934892  149888 config.go:182] Loaded profile config "pause-585941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:34.935011  149888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:00:34.975986  149888 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 12:00:34.977241  149888 start.go:297] selected driver: kvm2
	I0826 12:00:34.977260  149888 start.go:901] validating driver "kvm2" against <nil>
	I0826 12:00:34.977272  149888 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:00:34.979400  149888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:34.979566  149888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:00:34.997137  149888 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:00:34.997212  149888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 12:00:34.997427  149888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:00:34.997477  149888 cni.go:84] Creating CNI manager for ""
	I0826 12:00:34.997483  149888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:34.997491  149888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 12:00:34.997550  149888 start.go:340] cluster config:
	{Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:34.997646  149888 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:34.999713  149888 out.go:177] * Starting "no-preload-956479" primary control-plane node in "no-preload-956479" cluster
	I0826 12:00:35.001509  149888 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:00:35.001652  149888 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:00:35.001686  149888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json: {Name:mkcb43e7080c8c9ca9a5c07a906b057481d174e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:00:35.001842  149888 cache.go:107] acquiring lock: {Name:mk1767efba407c891118d7c821e3766818b7f843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001886  149888 cache.go:107] acquiring lock: {Name:mk143974065fe17d9bc5e80e27b4fccf9752f01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001916  149888 cache.go:107] acquiring lock: {Name:mk6f96da167d35c2f7e8d32d0ae0e8f8487dd7e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001938  149888 cache.go:115] /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0826 12:00:35.001952  149888 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.082µs
	I0826 12:00:35.001962  149888 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0826 12:00:35.001953  149888 cache.go:107] acquiring lock: {Name:mka852d88e85aeae41fd5cb176959eb9c0506fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001872  149888 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:00:35.001901  149888 cache.go:107] acquiring lock: {Name:mk92f70fd16cb7db6c86868c8d71d0b29e90c59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002019  149888 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:00:35.002016  149888 cache.go:107] acquiring lock: {Name:mk49b15249a7e160c500977bbbe9a8f502e81573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002019  149888 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:00:35.002094  149888 start.go:364] duration metric: took 59.686µs to acquireMachinesLock for "no-preload-956479"
	I0826 12:00:35.002116  149888 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:00:35.001829  149888 cache.go:107] acquiring lock: {Name:mk5d40ed405db0ae63d7e065a4c18ccfefae4113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002118  149888 start.go:93] Provisioning new machine with config: &{Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:00:35.002216  149888 start.go:125] createHost starting for "" (driver="kvm2")
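The cache.go lines above (for example the storage-provisioner entry) check whether each required image already exists as a tarball under .minikube/cache/images before any pull is attempted. A minimal sketch of that existence check (the cache root matches the path in the log; the image-to-filename mapping here is simplified):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarPath maps an image ref to its tarball path under the cache root,
// following the "<registry>/<repo>_<tag>" layout visible in the log.
func cachedTarPath(root, image string) string {
	name := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(root, "images", "amd64", filepath.FromSlash(name))
}

func main() {
	root := os.ExpandEnv("$HOME/.minikube/cache")
	for _, img := range []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-scheduler:v1.31.0",
	} {
		p := cachedTarPath(root, img)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("%s: cached at %s, skipping download\n", img, p)
		} else {
			fmt.Printf("%s: not cached, would pull and save to %s\n", img, p)
		}
	}
}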
	I0826 12:00:34.027196  149261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:00:34.049798  149261 node_ready.go:35] waiting up to 6m0s for node "pause-585941" to be "Ready" ...
	I0826 12:00:34.054057  149261 node_ready.go:49] node "pause-585941" has status "Ready":"True"
	I0826 12:00:34.054098  149261 node_ready.go:38] duration metric: took 4.251976ms for node "pause-585941" to be "Ready" ...
	I0826 12:00:34.054111  149261 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:00:34.061567  149261 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.067482  149261 pod_ready.go:93] pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.067506  149261 pod_ready.go:82] duration metric: took 5.909112ms for pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.067516  149261 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.447179  149261 pod_ready.go:93] pod "etcd-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.447211  149261 pod_ready.go:82] duration metric: took 379.687173ms for pod "etcd-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.447226  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.846805  149261 pod_ready.go:93] pod "kube-apiserver-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.846859  149261 pod_ready.go:82] duration metric: took 399.623625ms for pod "kube-apiserver-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.846875  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.248359  149261 pod_ready.go:93] pod "kube-controller-manager-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:35.248395  149261 pod_ready.go:82] duration metric: took 401.510097ms for pod "kube-controller-manager-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.248410  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shqfk" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.647416  149261 pod_ready.go:93] pod "kube-proxy-shqfk" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:35.647445  149261 pod_ready.go:82] duration metric: took 399.027474ms for pod "kube-proxy-shqfk" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.647456  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:36.049888  149261 pod_ready.go:93] pod "kube-scheduler-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:36.049914  149261 pod_ready.go:82] duration metric: took 402.45112ms for pod "kube-scheduler-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:36.049923  149261 pod_ready.go:39] duration metric: took 1.995800352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:00:36.049944  149261 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:00:36.049997  149261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:00:36.064434  149261 api_server.go:72] duration metric: took 2.247026675s to wait for apiserver process to appear ...
	I0826 12:00:36.064469  149261 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:00:36.064492  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:36.069527  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0826 12:00:36.070439  149261 api_server.go:141] control plane version: v1.31.0
	I0826 12:00:36.070461  149261 api_server.go:131] duration metric: took 5.98451ms to wait for apiserver health ...
	I0826 12:00:36.070472  149261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:00:36.249831  149261 system_pods.go:59] 6 kube-system pods found
	I0826 12:00:36.249865  149261 system_pods.go:61] "coredns-6f6b679f8f-mrsqd" [87d3bb7c-c342-4e1d-a968-4bce3cffcd28] Running
	I0826 12:00:36.249875  149261 system_pods.go:61] "etcd-pause-585941" [7b3e42bb-dfb8-4e1c-a207-58ad5b4db4a5] Running
	I0826 12:00:36.249879  149261 system_pods.go:61] "kube-apiserver-pause-585941" [d87291db-5d08-4821-b0ef-8c69ad30903a] Running
	I0826 12:00:36.249885  149261 system_pods.go:61] "kube-controller-manager-pause-585941" [847f5f6f-0015-4dd3-a8c5-226b5f766d47] Running
	I0826 12:00:36.249891  149261 system_pods.go:61] "kube-proxy-shqfk" [78f3c9d3-c561-4dc3-b495-19ef43f0d35f] Running
	I0826 12:00:36.249896  149261 system_pods.go:61] "kube-scheduler-pause-585941" [11ab98fe-0037-44ff-b5dc-93bf9609bfee] Running
	I0826 12:00:36.249904  149261 system_pods.go:74] duration metric: took 179.425091ms to wait for pod list to return data ...
	I0826 12:00:36.249913  149261 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:00:36.448455  149261 default_sa.go:45] found service account: "default"
	I0826 12:00:36.448489  149261 default_sa.go:55] duration metric: took 198.568325ms for default service account to be created ...
	I0826 12:00:36.448502  149261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:00:36.650172  149261 system_pods.go:86] 6 kube-system pods found
	I0826 12:00:36.650205  149261 system_pods.go:89] "coredns-6f6b679f8f-mrsqd" [87d3bb7c-c342-4e1d-a968-4bce3cffcd28] Running
	I0826 12:00:36.650210  149261 system_pods.go:89] "etcd-pause-585941" [7b3e42bb-dfb8-4e1c-a207-58ad5b4db4a5] Running
	I0826 12:00:36.650215  149261 system_pods.go:89] "kube-apiserver-pause-585941" [d87291db-5d08-4821-b0ef-8c69ad30903a] Running
	I0826 12:00:36.650219  149261 system_pods.go:89] "kube-controller-manager-pause-585941" [847f5f6f-0015-4dd3-a8c5-226b5f766d47] Running
	I0826 12:00:36.650222  149261 system_pods.go:89] "kube-proxy-shqfk" [78f3c9d3-c561-4dc3-b495-19ef43f0d35f] Running
	I0826 12:00:36.650226  149261 system_pods.go:89] "kube-scheduler-pause-585941" [11ab98fe-0037-44ff-b5dc-93bf9609bfee] Running
	I0826 12:00:36.650232  149261 system_pods.go:126] duration metric: took 201.72492ms to wait for k8s-apps to be running ...
	I0826 12:00:36.650238  149261 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:00:36.650282  149261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:00:36.669362  149261 system_svc.go:56] duration metric: took 19.109951ms WaitForService to wait for kubelet
	I0826 12:00:36.669398  149261 kubeadm.go:582] duration metric: took 2.851999188s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:00:36.669422  149261 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:00:36.848225  149261 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:00:36.848251  149261 node_conditions.go:123] node cpu capacity is 2
	I0826 12:00:36.848261  149261 node_conditions.go:105] duration metric: took 178.833653ms to run NodePressure ...
	I0826 12:00:36.848274  149261 start.go:241] waiting for startup goroutines ...
	I0826 12:00:36.848280  149261 start.go:246] waiting for cluster config update ...
	I0826 12:00:36.848288  149261 start.go:255] writing updated cluster config ...
	I0826 12:00:36.848638  149261 ssh_runner.go:195] Run: rm -f paused
	I0826 12:00:36.904551  149261 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:00:36.907425  149261 out.go:177] * Done! kubectl is now configured to use "pause-585941" cluster and "default" namespace by default
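
The 149261 log above walks minikube's post-restart verification for pause-585941: node Ready, per-pod Ready conditions, an apiserver /healthz probe, the system-pod and default-service-account checks, the kubelet unit, and NodePressure. As a standalone sketch of the /healthz step only (this is not minikube's code; the endpoint, timeout, and the InsecureSkipVerify shortcut are assumptions for illustration), the same probe could be written as:

// Minimal sketch: poll an apiserver /healthz endpoint until it returns 200 "ok",
// mirroring the check logged at 12:00:36.064-36.070 above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test VM serves a self-signed cert; skipping verification keeps the
		// sketch self-contained. A real check would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// 192.168.39.13:8443 is the endpoint from the log; substitute your own cluster.
	if err := waitForHealthz("https://192.168.39.13:8443/healthz", 6*time.Minute); err != nil {
		panic(err)
	}
}

Run against a live cluster, a successful probe corresponds to the 200 / "ok" response logged at 12:00:36.069.
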
	
	
	==> CRI-O <==
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.607599355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673637607567835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce0eb81c-b5cf-4999-980d-ba4b73911f24 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.608261592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=376374a3-3832-419b-be01-ca70bc639bb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.608379356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=376374a3-3832-419b-be01-ca70bc639bb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.608637512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=376374a3-3832-419b-be01-ca70bc639bb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.654810325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=772dda68-b16a-4d51-bae6-105d928f9cc9 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.655026979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=772dda68-b16a-4d51-bae6-105d928f9cc9 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.656256628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86beeafe-cf0f-456a-8a7b-db79e9358a6c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.656636708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673637656612173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86beeafe-cf0f-456a-8a7b-db79e9358a6c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.657149075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f203c974-2849-4e7b-af66-f7fcf29cd021 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.657220487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f203c974-2849-4e7b-af66-f7fcf29cd021 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.657478400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f203c974-2849-4e7b-af66-f7fcf29cd021 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.713916750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed7b611d-c2d2-406a-b831-3aebb59a402c name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.713999579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed7b611d-c2d2-406a-b831-3aebb59a402c name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.715690091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdee3861-e957-447f-9091-ee0d716fdbb5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.716112149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673637716086990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdee3861-e957-447f-9091-ee0d716fdbb5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.716675973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7ebcb96-857d-4af7-9162-1af75110ed58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.716734542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7ebcb96-857d-4af7-9162-1af75110ed58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.717182983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7ebcb96-857d-4af7-9162-1af75110ed58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.762012113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d26d14f-4897-4f8b-899b-f4fddc1bd508 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.762151631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d26d14f-4897-4f8b-899b-f4fddc1bd508 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.764035479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2e8bb70-41e4-4cdc-a477-5f5b2eb1921a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.764425292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673637764400945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2e8bb70-41e4-4cdc-a477-5f5b2eb1921a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.764955705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fb19db6-55f4-4e29-a8e6-7c0732222af8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.765045985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fb19db6-55f4-4e29-a8e6-7c0732222af8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:37 pause-585941 crio[2644]: time="2024-08-26 12:00:37.765355816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fb19db6-55f4-4e29-a8e6-7c0732222af8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c094dee5b14a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago       Running             coredns                   1                   2c9952f7bd610       coredns-6f6b679f8f-mrsqd
	4ebdd5fc6db6b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   18 seconds ago       Running             kube-proxy                2                   1ce2c4cdf77c1       kube-proxy-shqfk
	93206164dd2af       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago       Running             kube-controller-manager   2                   6cf625437914a       kube-controller-manager-pause-585941
	ab77ab688e7cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago       Running             kube-scheduler            2                   6aa452da73c08       kube-scheduler-pause-585941
	97fbdef95de43       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago       Running             kube-apiserver            2                   e0754514a1af7       kube-apiserver-pause-585941
	ef1d0dd05ae5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago       Running             etcd                      2                   d7dcbe5d5847b       etcd-pause-585941
	0f82b47d9fac5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   27 seconds ago       Exited              kube-scheduler            1                   d13d0acc47191       kube-scheduler-pause-585941
	0dadcb969f785       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   27 seconds ago       Exited              kube-controller-manager   1                   78f3b45b5487c       kube-controller-manager-pause-585941
	35036f2f4de24       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   27 seconds ago       Exited              kube-apiserver            1                   36e9ce0ad42a1       kube-apiserver-pause-585941
	385b96b79bf94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   27 seconds ago       Exited              kube-proxy                1                   4c931acef4243       kube-proxy-shqfk
	e7a660152d69a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago       Exited              etcd                      1                   09814b5523e09       etcd-pause-585941
	cd61163464af7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   607679c23203c       coredns-6f6b679f8f-mrsqd
	
	
	==> coredns [c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57429 - 52252 "HINFO IN 2039932744837135984.6481832000090125629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011029981s
	
	
	==> coredns [cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57574 - 1926 "HINFO IN 7009155030966930451.2391654659985760990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012717476s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-585941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-585941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=pause-585941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_59_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-585941
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:00:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    pause-585941
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff021fd9adab4575aadab910f3963079
	  System UUID:                ff021fd9-adab-4575-aada-b910f3963079
	  Boot ID:                    38aad1a3-799b-43dc-822f-9b375e5ed885
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mrsqd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-pause-585941                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         69s
	  kube-system                 kube-apiserver-pause-585941             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-585941    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-shqfk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-pause-585941             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     68s                kubelet          Node pause-585941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node pause-585941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node pause-585941 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeReady                67s                kubelet          Node pause-585941 status is now: NodeReady
	  Normal  RegisteredNode           64s                node-controller  Node pause-585941 event: Registered Node pause-585941 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-585941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-585941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-585941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-585941 event: Registered Node pause-585941 in Controller
	
	
	==> dmesg <==
	[ +11.420830] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.063785] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054376] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.176532] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.158713] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.278199] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.234799] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.170584] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.066438] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.499794] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.077017] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.783550] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.698878] kauditd_printk_skb: 43 callbacks suppressed
	[Aug26 12:00] systemd-fstab-generator[2025]: Ignoring "noauto" option for root device
	[  +0.077967] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.053600] systemd-fstab-generator[2037]: Ignoring "noauto" option for root device
	[  +0.169206] systemd-fstab-generator[2051]: Ignoring "noauto" option for root device
	[  +0.147773] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +0.892394] systemd-fstab-generator[2405]: Ignoring "noauto" option for root device
	[  +1.103686] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +2.292054] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +0.347222] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.312387] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.688385] kauditd_printk_skb: 16 callbacks suppressed
	[  +4.977335] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	
	
	==> etcd [e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7] <==
	
	
	==> etcd [ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f] <==
	{"level":"info","ts":"2024-08-26T12:00:14.652022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","added-peer-id":"1d3fba3e6c6ecbcd","added-peer-peer-urls":["https://192.168.39.13:2380"]}
	{"level":"info","ts":"2024-08-26T12:00:14.652229Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:14.652282Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:14.655274Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:14.657391Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-08-26T12:00:14.657447Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-08-26T12:00:14.657175Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:00:14.657639Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1d3fba3e6c6ecbcd","initial-advertise-peer-urls":["https://192.168.39.13:2380"],"listen-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:00:14.657673Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:00:16.420427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgPreVoteResp from 1d3fba3e6c6ecbcd at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgVoteResp from 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became leader at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d3fba3e6c6ecbcd elected leader 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.426157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:16.426400Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:16.426775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:16.426810Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:16.426153Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1d3fba3e6c6ecbcd","local-member-attributes":"{Name:pause-585941 ClientURLs:[https://192.168.39.13:2379]}","request-path":"/0/members/1d3fba3e6c6ecbcd/attributes","cluster-id":"1e01947a35a5ac2c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:00:16.427902Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:16.428031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:16.429197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:00:16.429197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
	
	
	==> kernel <==
	 12:00:38 up 1 min,  0 users,  load average: 1.15, 0.47, 0.17
	Linux pause-585941 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308] <==
	
	
	==> kube-apiserver [97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645] <==
	I0826 12:00:17.924369       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 12:00:17.931726       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0826 12:00:17.931987       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 12:00:17.932217       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 12:00:17.932265       1 shared_informer.go:320] Caches are synced for configmaps
	I0826 12:00:17.932320       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 12:00:17.932345       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 12:00:17.961402       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 12:00:17.962330       1 aggregator.go:171] initial CRD sync complete...
	I0826 12:00:17.962376       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 12:00:17.962383       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 12:00:17.962390       1 cache.go:39] Caches are synced for autoregister controller
	E0826 12:00:17.966478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0826 12:00:17.968605       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 12:00:17.983319       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 12:00:17.983415       1 policy_source.go:224] refreshing policies
	I0826 12:00:18.044890       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 12:00:18.837522       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 12:00:20.070466       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 12:00:20.089554       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 12:00:20.158550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 12:00:20.195088       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 12:00:20.205173       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 12:00:21.497454       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 12:00:21.649402       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849] <==
	
	
	==> kube-controller-manager [93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089] <==
	I0826 12:00:21.243547       1 shared_informer.go:320] Caches are synced for ephemeral
	I0826 12:00:21.243599       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0826 12:00:21.243620       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0826 12:00:21.244952       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0826 12:00:21.245120       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0826 12:00:21.245181       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0826 12:00:21.245273       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0826 12:00:21.245440       1 shared_informer.go:320] Caches are synced for service account
	I0826 12:00:21.245514       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0826 12:00:21.253101       1 shared_informer.go:320] Caches are synced for HPA
	I0826 12:00:21.253144       1 shared_informer.go:320] Caches are synced for namespace
	I0826 12:00:21.307251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="128.922427ms"
	I0826 12:00:21.307700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="52.163µs"
	I0826 12:00:21.316387       1 shared_informer.go:320] Caches are synced for attach detach
	I0826 12:00:21.428835       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0826 12:00:21.444476       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0826 12:00:21.456980       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:21.474461       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:21.487050       1 shared_informer.go:320] Caches are synced for disruption
	I0826 12:00:21.493818       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0826 12:00:21.875627       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:21.926433       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:21.926527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0826 12:00:28.898568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.477644ms"
	I0826 12:00:28.899054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.804µs"
	
	
	==> kube-proxy [385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31] <==
	
	
	==> kube-proxy [4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:00:19.460475       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:00:19.478567       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	E0826 12:00:19.478664       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:00:19.540288       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:00:19.540342       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:00:19.540367       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:00:19.543290       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:00:19.543526       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:00:19.543560       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:19.545190       1 config.go:197] "Starting service config controller"
	I0826 12:00:19.545229       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:00:19.545251       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:00:19.545255       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:00:19.545761       1 config.go:326] "Starting node config controller"
	I0826 12:00:19.545785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:00:19.646800       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:00:19.646782       1 shared_informer.go:320] Caches are synced for node config
	I0826 12:00:19.646830       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447] <==
	
	
	==> kube-scheduler [ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8] <==
	I0826 12:00:15.270817       1 serving.go:386] Generated self-signed cert in-memory
	W0826 12:00:17.880766       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 12:00:17.880819       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 12:00:17.880833       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 12:00:17.880878       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 12:00:17.959156       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 12:00:17.959196       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:17.964749       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 12:00:17.964984       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 12:00:17.965033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 12:00:17.965066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 12:00:18.065560       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.189412    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.13:8443: connect: connection refused" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.266976    3045 scope.go:117] "RemoveContainer" containerID="e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.268097    3045 scope.go:117] "RemoveContainer" containerID="35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.269782    3045 scope.go:117] "RemoveContainer" containerID="0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.272949    3045 scope.go:117] "RemoveContainer" containerID="0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.425479    3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-585941?timeout=10s\": dial tcp 192.168.39.13:8443: connect: connection refused" interval="800ms"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.591592    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.592420    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.13:8443: connect: connection refused" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: W0826 12:00:14.606540    3045 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-585941&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.606624    3045 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-585941&limit=500&resourceVersion=0\": dial tcp 192.168.39.13:8443: connect: connection refused" logger="UnhandledError"
	Aug 26 12:00:15 pause-585941 kubelet[3045]: I0826 12:00:15.394321    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065110    3045 kubelet_node_status.go:111] "Node was previously registered" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065250    3045 kubelet_node_status.go:75] "Successfully registered node" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065290    3045 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.066662    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.794171    3045 apiserver.go:52] "Watching apiserver"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.814233    3045 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.905243    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f3c9d3-c561-4dc3-b495-19ef43f0d35f-xtables-lock\") pod \"kube-proxy-shqfk\" (UID: \"78f3c9d3-c561-4dc3-b495-19ef43f0d35f\") " pod="kube-system/kube-proxy-shqfk"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.905441    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f3c9d3-c561-4dc3-b495-19ef43f0d35f-lib-modules\") pod \"kube-proxy-shqfk\" (UID: \"78f3c9d3-c561-4dc3-b495-19ef43f0d35f\") " pod="kube-system/kube-proxy-shqfk"
	Aug 26 12:00:19 pause-585941 kubelet[3045]: I0826 12:00:19.098916    3045 scope.go:117] "RemoveContainer" containerID="385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31"
	Aug 26 12:00:23 pause-585941 kubelet[3045]: E0826 12:00:23.886286    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673623885949756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:23 pause-585941 kubelet[3045]: E0826 12:00:23.886582    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673623885949756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:28 pause-585941 kubelet[3045]: I0826 12:00:28.869502    3045 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 26 12:00:33 pause-585941 kubelet[3045]: E0826 12:00:33.888771    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673633888207823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:33 pause-585941 kubelet[3045]: E0826 12:00:33.888800    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673633888207823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-585941 -n pause-585941
helpers_test.go:261: (dbg) Run:  kubectl --context pause-585941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-585941 -n pause-585941
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-585941 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-585941 logs -n 25: (1.35715987s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo cat                            | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo                                | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo find                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-814705 sudo crio                           | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-814705                                     | cilium-814705             | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p pause-585941 --memory=2048                        | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:59 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-373568 ssh                              | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-373568 -- sudo                       | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-373568                               | cert-options-373568       | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC | 26 Aug 24 11:58 UTC |
	| start   | -p old-k8s-version-839656                            | old-k8s-version-839656    | jenkins | v1.33.1 | 26 Aug 24 11:58 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 11:59 UTC |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-585941                                      | pause-585941              | jenkins | v1.33.1 | 26 Aug 24 11:59 UTC | 26 Aug 24 12:00 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                         | kubernetes-upgrade-117510 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                 | no-preload-956479         | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:00:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:00:34.917916  149888 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:00:34.918057  149888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:34.918067  149888 out.go:358] Setting ErrFile to fd 2...
	I0826 12:00:34.918072  149888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:00:34.918265  149888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:00:34.918890  149888 out.go:352] Setting JSON to false
	I0826 12:00:34.919938  149888 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6176,"bootTime":1724667459,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:00:34.920007  149888 start.go:139] virtualization: kvm guest
	I0826 12:00:34.922396  149888 out.go:177] * [no-preload-956479] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:00:34.923742  149888 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:00:34.923784  149888 notify.go:220] Checking for updates...
	I0826 12:00:34.926484  149888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:00:34.928394  149888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:00:34.930141  149888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:00:34.931697  149888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:00:34.932954  149888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:00:34.934571  149888 config.go:182] Loaded profile config "cert-expiration-156240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:34.934723  149888 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:00:34.934892  149888 config.go:182] Loaded profile config "pause-585941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:00:34.935011  149888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:00:34.975986  149888 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 12:00:34.977241  149888 start.go:297] selected driver: kvm2
	I0826 12:00:34.977260  149888 start.go:901] validating driver "kvm2" against <nil>
	I0826 12:00:34.977272  149888 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:00:34.979400  149888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:34.979566  149888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:00:34.997137  149888 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:00:34.997212  149888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 12:00:34.997427  149888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:00:34.997477  149888 cni.go:84] Creating CNI manager for ""
	I0826 12:00:34.997483  149888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:00:34.997491  149888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 12:00:34.997550  149888 start.go:340] cluster config:
	{Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:00:34.997646  149888 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:34.999713  149888 out.go:177] * Starting "no-preload-956479" primary control-plane node in "no-preload-956479" cluster
	I0826 12:00:35.001509  149888 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:00:35.001652  149888 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:00:35.001686  149888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json: {Name:mkcb43e7080c8c9ca9a5c07a906b057481d174e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:00:35.001842  149888 cache.go:107] acquiring lock: {Name:mk1767efba407c891118d7c821e3766818b7f843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001886  149888 cache.go:107] acquiring lock: {Name:mk143974065fe17d9bc5e80e27b4fccf9752f01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001916  149888 cache.go:107] acquiring lock: {Name:mk6f96da167d35c2f7e8d32d0ae0e8f8487dd7e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001938  149888 cache.go:115] /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0826 12:00:35.001952  149888 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.082µs
	I0826 12:00:35.001962  149888 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0826 12:00:35.001953  149888 cache.go:107] acquiring lock: {Name:mka852d88e85aeae41fd5cb176959eb9c0506fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.001872  149888 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:00:35.001901  149888 cache.go:107] acquiring lock: {Name:mk92f70fd16cb7db6c86868c8d71d0b29e90c59b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002019  149888 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:00:35.002016  149888 cache.go:107] acquiring lock: {Name:mk49b15249a7e160c500977bbbe9a8f502e81573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002019  149888 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:00:35.002094  149888 start.go:364] duration metric: took 59.686µs to acquireMachinesLock for "no-preload-956479"
	I0826 12:00:35.002116  149888 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:00:35.001829  149888 cache.go:107] acquiring lock: {Name:mk5d40ed405db0ae63d7e065a4c18ccfefae4113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:00:35.002118  149888 start.go:93] Provisioning new machine with config: &{Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:00:35.002216  149888 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 12:00:34.027196  149261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:00:34.049798  149261 node_ready.go:35] waiting up to 6m0s for node "pause-585941" to be "Ready" ...
	I0826 12:00:34.054057  149261 node_ready.go:49] node "pause-585941" has status "Ready":"True"
	I0826 12:00:34.054098  149261 node_ready.go:38] duration metric: took 4.251976ms for node "pause-585941" to be "Ready" ...
	I0826 12:00:34.054111  149261 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:00:34.061567  149261 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.067482  149261 pod_ready.go:93] pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.067506  149261 pod_ready.go:82] duration metric: took 5.909112ms for pod "coredns-6f6b679f8f-mrsqd" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.067516  149261 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.447179  149261 pod_ready.go:93] pod "etcd-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.447211  149261 pod_ready.go:82] duration metric: took 379.687173ms for pod "etcd-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.447226  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.846805  149261 pod_ready.go:93] pod "kube-apiserver-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:34.846859  149261 pod_ready.go:82] duration metric: took 399.623625ms for pod "kube-apiserver-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:34.846875  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.248359  149261 pod_ready.go:93] pod "kube-controller-manager-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:35.248395  149261 pod_ready.go:82] duration metric: took 401.510097ms for pod "kube-controller-manager-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.248410  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shqfk" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.647416  149261 pod_ready.go:93] pod "kube-proxy-shqfk" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:35.647445  149261 pod_ready.go:82] duration metric: took 399.027474ms for pod "kube-proxy-shqfk" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:35.647456  149261 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:36.049888  149261 pod_ready.go:93] pod "kube-scheduler-pause-585941" in "kube-system" namespace has status "Ready":"True"
	I0826 12:00:36.049914  149261 pod_ready.go:82] duration metric: took 402.45112ms for pod "kube-scheduler-pause-585941" in "kube-system" namespace to be "Ready" ...
	I0826 12:00:36.049923  149261 pod_ready.go:39] duration metric: took 1.995800352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:00:36.049944  149261 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:00:36.049997  149261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:00:36.064434  149261 api_server.go:72] duration metric: took 2.247026675s to wait for apiserver process to appear ...
	I0826 12:00:36.064469  149261 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:00:36.064492  149261 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0826 12:00:36.069527  149261 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0826 12:00:36.070439  149261 api_server.go:141] control plane version: v1.31.0
	I0826 12:00:36.070461  149261 api_server.go:131] duration metric: took 5.98451ms to wait for apiserver health ...
	I0826 12:00:36.070472  149261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:00:36.249831  149261 system_pods.go:59] 6 kube-system pods found
	I0826 12:00:36.249865  149261 system_pods.go:61] "coredns-6f6b679f8f-mrsqd" [87d3bb7c-c342-4e1d-a968-4bce3cffcd28] Running
	I0826 12:00:36.249875  149261 system_pods.go:61] "etcd-pause-585941" [7b3e42bb-dfb8-4e1c-a207-58ad5b4db4a5] Running
	I0826 12:00:36.249879  149261 system_pods.go:61] "kube-apiserver-pause-585941" [d87291db-5d08-4821-b0ef-8c69ad30903a] Running
	I0826 12:00:36.249885  149261 system_pods.go:61] "kube-controller-manager-pause-585941" [847f5f6f-0015-4dd3-a8c5-226b5f766d47] Running
	I0826 12:00:36.249891  149261 system_pods.go:61] "kube-proxy-shqfk" [78f3c9d3-c561-4dc3-b495-19ef43f0d35f] Running
	I0826 12:00:36.249896  149261 system_pods.go:61] "kube-scheduler-pause-585941" [11ab98fe-0037-44ff-b5dc-93bf9609bfee] Running
	I0826 12:00:36.249904  149261 system_pods.go:74] duration metric: took 179.425091ms to wait for pod list to return data ...
	I0826 12:00:36.249913  149261 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:00:36.448455  149261 default_sa.go:45] found service account: "default"
	I0826 12:00:36.448489  149261 default_sa.go:55] duration metric: took 198.568325ms for default service account to be created ...
	I0826 12:00:36.448502  149261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:00:36.650172  149261 system_pods.go:86] 6 kube-system pods found
	I0826 12:00:36.650205  149261 system_pods.go:89] "coredns-6f6b679f8f-mrsqd" [87d3bb7c-c342-4e1d-a968-4bce3cffcd28] Running
	I0826 12:00:36.650210  149261 system_pods.go:89] "etcd-pause-585941" [7b3e42bb-dfb8-4e1c-a207-58ad5b4db4a5] Running
	I0826 12:00:36.650215  149261 system_pods.go:89] "kube-apiserver-pause-585941" [d87291db-5d08-4821-b0ef-8c69ad30903a] Running
	I0826 12:00:36.650219  149261 system_pods.go:89] "kube-controller-manager-pause-585941" [847f5f6f-0015-4dd3-a8c5-226b5f766d47] Running
	I0826 12:00:36.650222  149261 system_pods.go:89] "kube-proxy-shqfk" [78f3c9d3-c561-4dc3-b495-19ef43f0d35f] Running
	I0826 12:00:36.650226  149261 system_pods.go:89] "kube-scheduler-pause-585941" [11ab98fe-0037-44ff-b5dc-93bf9609bfee] Running
	I0826 12:00:36.650232  149261 system_pods.go:126] duration metric: took 201.72492ms to wait for k8s-apps to be running ...
	I0826 12:00:36.650238  149261 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:00:36.650282  149261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:00:36.669362  149261 system_svc.go:56] duration metric: took 19.109951ms WaitForService to wait for kubelet
	I0826 12:00:36.669398  149261 kubeadm.go:582] duration metric: took 2.851999188s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:00:36.669422  149261 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:00:36.848225  149261 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:00:36.848251  149261 node_conditions.go:123] node cpu capacity is 2
	I0826 12:00:36.848261  149261 node_conditions.go:105] duration metric: took 178.833653ms to run NodePressure ...
	I0826 12:00:36.848274  149261 start.go:241] waiting for startup goroutines ...
	I0826 12:00:36.848280  149261 start.go:246] waiting for cluster config update ...
	I0826 12:00:36.848288  149261 start.go:255] writing updated cluster config ...
	I0826 12:00:36.848638  149261 ssh_runner.go:195] Run: rm -f paused
	I0826 12:00:36.904551  149261 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:00:36.907425  149261 out.go:177] * Done! kubectl is now configured to use "pause-585941" cluster and "default" namespace by default
	I0826 12:00:37.253183  148739 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:00:37.253478  148739 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
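	The kubelet probe that kubeadm reports as failing above can be retried by hand from inside the affected node; a minimal check (illustrative only; substitute the profile this kubeadm run belongs to, which is not named in this excerpt) is:
	
		out/minikube-linux-amd64 -p <profile> ssh "curl -sSL http://localhost:10248/healthz"
	
	A healthy kubelet answers "ok" on its default healthz port 10248; the connection-refused error above indicates the kubelet on that node was not yet listening there.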
	
	
	==> CRI-O <==
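	The entries below are CRI-O's debug-level gRPC interceptor logs (Version, ImageFsInfo, ListPodSandbox, ListContainers requests and responses) on the pause-585941 node. Equivalent RuntimeService calls can be issued manually with crictl; an illustrative example, assuming the profile is still running:
	
		out/minikube-linux-amd64 -p pause-585941 ssh "sudo crictl ps -a"
	
	which corresponds to the ListContainers responses recorded in the log.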
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.851141021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16708526-f36e-4e69-90a8-9de084b1200a name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.852062451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2dff4b5-c208-41e7-85f1-c697d0dc4636 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.853123652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673639853088092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2dff4b5-c208-41e7-85f1-c697d0dc4636 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.853722187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=714070ca-a113-4742-87e8-ec8829ba21b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.853805681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=714070ca-a113-4742-87e8-ec8829ba21b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.854229169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=714070ca-a113-4742-87e8-ec8829ba21b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.861620509Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=676ade7e-1ce0-4e7c-a726-1c4efe909d67 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.861877493Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-mrsqd,Uid:87d3bb7c-c342-4e1d-a968-4bce3cffcd28,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724673619116022055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:00:18.797020732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-585941,Uid:96459d404f834c1cd8fb23fd9a90d3ad,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724673611744159134,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 96459d404f834c1cd8fb23fd9a90d3ad,kubernetes.io/config.seen: 2024-08-26T11:59:30.242941162Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&PodSandboxMetadata{Name:kube-proxy-shqfk,Uid:78f3c9d3-c561-4dc3-b495-19ef43f0d35f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724673611663071672,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b
495-19ef43f0d35f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T11:59:35.178162760Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-585941,Uid:65ef7b8a04cd9b96aff43a5ca9d895f2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724673611642404216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.13:8443,kubernetes.io/config.hash: 65ef7b8a04cd9b96aff43a5ca9d895f2,kubernetes.io/config.seen: 2024-08-26T11:59:30.242939835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-585941,Uid:98f1fc7062ab4e96b8797284f7062584,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724673611592461477,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 98f1fc7062ab4e96b8797284f7062584,kubernetes.io/config.seen: 2024-08-26T11:59:30.242941966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&PodSandboxMetadata{Name:etcd-pause-585941,Uid:456ce88ca87a63cec79c96c4bdf2547f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724673611581396455,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.13:2379,kubernetes.io/config.hash: 456ce88ca87a63cec79c96c4bdf2547f,kubernetes.io/config.seen: 2024-08-26T11:59:30.242935216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=676ade7e-1ce0-4e7c-a726-1c4efe909d67 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.863740876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=929b75d7-7ade-4691-915e-5b84024f9633 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.863882579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=929b75d7-7ade-4691-915e-5b84024f9633 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.864139599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=929b75d7-7ade-4691-915e-5b84024f9633 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.897663509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cd54804-c9e4-4ead-9af7-dacc7533eefd name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.897740069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cd54804-c9e4-4ead-9af7-dacc7533eefd name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.899190799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=951a285c-bbff-43ca-ac90-435710d3094c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.899683708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673639899556337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=951a285c-bbff-43ca-ac90-435710d3094c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.900235781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdb910b2-fb0e-4fda-9231-a9002741c34d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.900297373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdb910b2-fb0e-4fda-9231-a9002741c34d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.900559200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdb910b2-fb0e-4fda-9231-a9002741c34d name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.946013477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97652f8c-c28a-4e6b-8e97-3f0c579b1daa name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.946087787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97652f8c-c28a-4e6b-8e97-3f0c579b1daa name=/runtime.v1.RuntimeService/Version
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.947339628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f659a5e-e95d-43bb-af89-9fe4b6003bcf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.947708155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673639947686593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f659a5e-e95d-43bb-af89-9fe4b6003bcf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.948377448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe9d88c4-ce53-493e-9771-2d241a5f4c6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.948430899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe9d88c4-ce53-493e-9771-2d241a5f4c6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:00:39 pause-585941 crio[2644]: time="2024-08-26 12:00:39.948683100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4,PodSandboxId:2c9952f7bd6105e0cf12dec7ddb6be4dd65856e8198b1bc05992a2984ad21491,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724673619555274726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d,PodSandboxId:1ce2c4cdf77c15b64d76ac138a93d6bfd14a57476b61b2057f9087e1773c8ecb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724673619150734950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8,PodSandboxId:6aa452da73c086f608dcf8aa0b15a198b9538e8b44f74e44b66b6c7f3dbe3578,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724673614320141993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062a
b4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089,PodSandboxId:6cf625437914ac1aa8f0bbc39f2a4ee9f7a1322a7b490eeed676af6e383429ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724673614324706174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96
459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645,PodSandboxId:e0754514a1af7c92349ba967b8a05e975412535f51930cc77f4b98dd669180fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724673614293679023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff4
3a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f,PodSandboxId:d7dcbe5d5847b892f87ad5802f9ed0bce5f4f316f852cc6e27d4c8dc2acc3782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724673614286479684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447,PodSandboxId:d13d0acc47191f6eba05d2332cd065a9cd912931909cc85629d7fa873c776100,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724673610058575599,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f1fc7062ab4e96b8797284f7062584,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849,PodSandboxId:78f3b45b5487c394d458fbfc8a148e1514e00c8a9d0549b94d123d0affc42e04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724673610008785449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96459d404f834c1cd8fb23fd9a90d3ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31,PodSandboxId:4c931acef42430515f9fc78e3f03d150fc2276c8c74a361548cf24496dbf219d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724673609902996676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shqfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f3c9d3-c561-4dc3-b495-19ef43f0d35f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308,PodSandboxId:36e9ce0ad42a1f21eaecaed3611f49d2823ff23492688a3eca049f5e9534e434,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724673609921164813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ef7b8a04cd9b96aff43a5ca9d895f2,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7,PodSandboxId:09814b5523e09d788dd8952d0787528c992736debd71bfb3345611bf513b0c11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724673609828354550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-585941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456ce88ca87a63cec79c96c4bdf2547f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310,PodSandboxId:607679c23203cfb0d1a11ec906713ca7e5803b05fc2e6d6699ab44a39271dbf7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724673575916221993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mrsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d3bb7c-c342-4e1d-a968-4bce3cffcd28,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe9d88c4-ce53-493e-9771-2d241a5f4c6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c094dee5b14a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago       Running             coredns                   1                   2c9952f7bd610       coredns-6f6b679f8f-mrsqd
	4ebdd5fc6db6b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   20 seconds ago       Running             kube-proxy                2                   1ce2c4cdf77c1       kube-proxy-shqfk
	93206164dd2af       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago       Running             kube-controller-manager   2                   6cf625437914a       kube-controller-manager-pause-585941
	ab77ab688e7cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   25 seconds ago       Running             kube-scheduler            2                   6aa452da73c08       kube-scheduler-pause-585941
	97fbdef95de43       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago       Running             kube-apiserver            2                   e0754514a1af7       kube-apiserver-pause-585941
	ef1d0dd05ae5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago       Running             etcd                      2                   d7dcbe5d5847b       etcd-pause-585941
	0f82b47d9fac5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   29 seconds ago       Exited              kube-scheduler            1                   d13d0acc47191       kube-scheduler-pause-585941
	0dadcb969f785       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   30 seconds ago       Exited              kube-controller-manager   1                   78f3b45b5487c       kube-controller-manager-pause-585941
	35036f2f4de24       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   30 seconds ago       Exited              kube-apiserver            1                   36e9ce0ad42a1       kube-apiserver-pause-585941
	385b96b79bf94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   30 seconds ago       Exited              kube-proxy                1                   4c931acef4243       kube-proxy-shqfk
	e7a660152d69a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago       Exited              etcd                      1                   09814b5523e09       etcd-pause-585941
	cd61163464af7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   607679c23203c       coredns-6f6b679f8f-mrsqd
	
	
	==> coredns [c094dee5b14a36c4da1c4ef9df8eec18ec28fa3f8a1210c43cecce8381bba7e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57429 - 52252 "HINFO IN 2039932744837135984.6481832000090125629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011029981s
	
	
	==> coredns [cd61163464af739a5ae9a018d36ce4a79c0f113dcad998d33c51b4a8fd824310] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57574 - 1926 "HINFO IN 7009155030966930451.2391654659985760990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012717476s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-585941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-585941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=pause-585941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T11_59_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 11:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-585941
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:00:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:00:18 +0000   Mon, 26 Aug 2024 11:59:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    pause-585941
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff021fd9adab4575aadab910f3963079
	  System UUID:                ff021fd9-adab-4575-aada-b910f3963079
	  Boot ID:                    38aad1a3-799b-43dc-822f-9b375e5ed885
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mrsqd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 etcd-pause-585941                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         71s
	  kube-system                 kube-apiserver-pause-585941             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-585941    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-shqfk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-pause-585941             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     70s                kubelet          Node pause-585941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node pause-585941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node pause-585941 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeReady                69s                kubelet          Node pause-585941 status is now: NodeReady
	  Normal  RegisteredNode           66s                node-controller  Node pause-585941 event: Registered Node pause-585941 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-585941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-585941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-585941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node pause-585941 event: Registered Node pause-585941 in Controller
	
	
	==> dmesg <==
	[ +11.420830] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.063785] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054376] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.176532] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.158713] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.278199] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.234799] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.170584] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.066438] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.499794] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.077017] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.783550] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.698878] kauditd_printk_skb: 43 callbacks suppressed
	[Aug26 12:00] systemd-fstab-generator[2025]: Ignoring "noauto" option for root device
	[  +0.077967] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.053600] systemd-fstab-generator[2037]: Ignoring "noauto" option for root device
	[  +0.169206] systemd-fstab-generator[2051]: Ignoring "noauto" option for root device
	[  +0.147773] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +0.892394] systemd-fstab-generator[2405]: Ignoring "noauto" option for root device
	[  +1.103686] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +2.292054] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +0.347222] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.312387] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.688385] kauditd_printk_skb: 16 callbacks suppressed
	[  +4.977335] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	
	
	==> etcd [e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7] <==
	
	
	==> etcd [ef1d0dd05ae5d73d5a61e2918ffe8b9ffc7d32815c3ca9c86369cbaebeb3b84f] <==
	{"level":"info","ts":"2024-08-26T12:00:14.652022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","added-peer-id":"1d3fba3e6c6ecbcd","added-peer-peer-urls":["https://192.168.39.13:2380"]}
	{"level":"info","ts":"2024-08-26T12:00:14.652229Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:14.652282Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:00:14.655274Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:14.657391Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-08-26T12:00:14.657447Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-08-26T12:00:14.657175Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:00:14.657639Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1d3fba3e6c6ecbcd","initial-advertise-peer-urls":["https://192.168.39.13:2380"],"listen-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:00:14.657673Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:00:16.420427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgPreVoteResp from 1d3fba3e6c6ecbcd at term 2"}
	{"level":"info","ts":"2024-08-26T12:00:16.420581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became candidate at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgVoteResp from 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became leader at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.420613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d3fba3e6c6ecbcd elected leader 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-08-26T12:00:16.426157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:16.426400Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:00:16.426775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:16.426810Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:00:16.426153Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1d3fba3e6c6ecbcd","local-member-attributes":"{Name:pause-585941 ClientURLs:[https://192.168.39.13:2379]}","request-path":"/0/members/1d3fba3e6c6ecbcd/attributes","cluster-id":"1e01947a35a5ac2c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:00:16.427902Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:16.428031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:00:16.429197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:00:16.429197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
	
	
	==> kernel <==
	 12:00:40 up 1 min,  0 users,  load average: 1.15, 0.47, 0.17
	Linux pause-585941 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308] <==
	
	
	==> kube-apiserver [97fbdef95de430cd6bcc99d3ec23fb9ce7dc88843ef8f1080af7024969ade645] <==
	I0826 12:00:17.924369       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0826 12:00:17.931726       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0826 12:00:17.931987       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0826 12:00:17.932217       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0826 12:00:17.932265       1 shared_informer.go:320] Caches are synced for configmaps
	I0826 12:00:17.932320       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0826 12:00:17.932345       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0826 12:00:17.961402       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0826 12:00:17.962330       1 aggregator.go:171] initial CRD sync complete...
	I0826 12:00:17.962376       1 autoregister_controller.go:144] Starting autoregister controller
	I0826 12:00:17.962383       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0826 12:00:17.962390       1 cache.go:39] Caches are synced for autoregister controller
	E0826 12:00:17.966478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0826 12:00:17.968605       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0826 12:00:17.983319       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0826 12:00:17.983415       1 policy_source.go:224] refreshing policies
	I0826 12:00:18.044890       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0826 12:00:18.837522       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0826 12:00:20.070466       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0826 12:00:20.089554       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0826 12:00:20.158550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0826 12:00:20.195088       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0826 12:00:20.205173       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0826 12:00:21.497454       1 controller.go:615] quota admission added evaluator for: endpoints
	I0826 12:00:21.649402       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849] <==
	
	
	==> kube-controller-manager [93206164dd2af63ed1b7f13e57047f3efb6a09fee5cc2130a9efc0c5d80ba089] <==
	I0826 12:00:21.243547       1 shared_informer.go:320] Caches are synced for ephemeral
	I0826 12:00:21.243599       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0826 12:00:21.243620       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0826 12:00:21.244952       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0826 12:00:21.245120       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0826 12:00:21.245181       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0826 12:00:21.245273       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0826 12:00:21.245440       1 shared_informer.go:320] Caches are synced for service account
	I0826 12:00:21.245514       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0826 12:00:21.253101       1 shared_informer.go:320] Caches are synced for HPA
	I0826 12:00:21.253144       1 shared_informer.go:320] Caches are synced for namespace
	I0826 12:00:21.307251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="128.922427ms"
	I0826 12:00:21.307700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="52.163µs"
	I0826 12:00:21.316387       1 shared_informer.go:320] Caches are synced for attach detach
	I0826 12:00:21.428835       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0826 12:00:21.444476       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0826 12:00:21.456980       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:21.474461       1 shared_informer.go:320] Caches are synced for resource quota
	I0826 12:00:21.487050       1 shared_informer.go:320] Caches are synced for disruption
	I0826 12:00:21.493818       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0826 12:00:21.875627       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:21.926433       1 shared_informer.go:320] Caches are synced for garbage collector
	I0826 12:00:21.926527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0826 12:00:28.898568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.477644ms"
	I0826 12:00:28.899054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.804µs"
	
	
	==> kube-proxy [385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31] <==
	
	
	==> kube-proxy [4ebdd5fc6db6bc453ae3f566447a116c77a595eb3e31363122e4c86faca0f06d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:00:19.460475       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:00:19.478567       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	E0826 12:00:19.478664       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:00:19.540288       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:00:19.540342       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:00:19.540367       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:00:19.543290       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:00:19.543526       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:00:19.543560       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:19.545190       1 config.go:197] "Starting service config controller"
	I0826 12:00:19.545229       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:00:19.545251       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:00:19.545255       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:00:19.545761       1 config.go:326] "Starting node config controller"
	I0826 12:00:19.545785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:00:19.646800       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:00:19.646782       1 shared_informer.go:320] Caches are synced for node config
	I0826 12:00:19.646830       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447] <==
	
	
	==> kube-scheduler [ab77ab688e7cd5ab72c2be2b1dfbc33989c0223fb6228a674d8f30537664c4d8] <==
	I0826 12:00:15.270817       1 serving.go:386] Generated self-signed cert in-memory
	W0826 12:00:17.880766       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0826 12:00:17.880819       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0826 12:00:17.880833       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0826 12:00:17.880878       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0826 12:00:17.959156       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0826 12:00:17.959196       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:00:17.964749       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0826 12:00:17.964984       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0826 12:00:17.965033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0826 12:00:17.965066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0826 12:00:18.065560       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.189412    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.13:8443: connect: connection refused" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.266976    3045 scope.go:117] "RemoveContainer" containerID="e7a660152d69a99a4ab2631a656b671461420ec3754405fd93b4ed43cc6b7ed7"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.268097    3045 scope.go:117] "RemoveContainer" containerID="35036f2f4de24fd33ccfc63896678cff7c9e7e59d77dbc734f1ed309069a2308"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.269782    3045 scope.go:117] "RemoveContainer" containerID="0dadcb969f785b4aeb4e83e4a8c2e30e6b1b02edcc334e6e26ad581b48b54849"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.272949    3045 scope.go:117] "RemoveContainer" containerID="0f82b47d9fac5217f0ca37e73b998a22eda20555eae5cc05c44adb25b3dc0447"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.425479    3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-585941?timeout=10s\": dial tcp 192.168.39.13:8443: connect: connection refused" interval="800ms"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: I0826 12:00:14.591592    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.592420    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.13:8443: connect: connection refused" node="pause-585941"
	Aug 26 12:00:14 pause-585941 kubelet[3045]: W0826 12:00:14.606540    3045 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-585941&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8443: connect: connection refused
	Aug 26 12:00:14 pause-585941 kubelet[3045]: E0826 12:00:14.606624    3045 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-585941&limit=500&resourceVersion=0\": dial tcp 192.168.39.13:8443: connect: connection refused" logger="UnhandledError"
	Aug 26 12:00:15 pause-585941 kubelet[3045]: I0826 12:00:15.394321    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065110    3045 kubelet_node_status.go:111] "Node was previously registered" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065250    3045 kubelet_node_status.go:75] "Successfully registered node" node="pause-585941"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.065290    3045 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.066662    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.794171    3045 apiserver.go:52] "Watching apiserver"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.814233    3045 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.905243    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f3c9d3-c561-4dc3-b495-19ef43f0d35f-xtables-lock\") pod \"kube-proxy-shqfk\" (UID: \"78f3c9d3-c561-4dc3-b495-19ef43f0d35f\") " pod="kube-system/kube-proxy-shqfk"
	Aug 26 12:00:18 pause-585941 kubelet[3045]: I0826 12:00:18.905441    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f3c9d3-c561-4dc3-b495-19ef43f0d35f-lib-modules\") pod \"kube-proxy-shqfk\" (UID: \"78f3c9d3-c561-4dc3-b495-19ef43f0d35f\") " pod="kube-system/kube-proxy-shqfk"
	Aug 26 12:00:19 pause-585941 kubelet[3045]: I0826 12:00:19.098916    3045 scope.go:117] "RemoveContainer" containerID="385b96b79bf941aa2eb2f3ffd514e888f972da9c8cc70b9bec9dee2db204fe31"
	Aug 26 12:00:23 pause-585941 kubelet[3045]: E0826 12:00:23.886286    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673623885949756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:23 pause-585941 kubelet[3045]: E0826 12:00:23.886582    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673623885949756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:28 pause-585941 kubelet[3045]: I0826 12:00:28.869502    3045 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 26 12:00:33 pause-585941 kubelet[3045]: E0826 12:00:33.888771    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673633888207823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:00:33 pause-585941 kubelet[3045]: E0826 12:00:33.888800    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724673633888207823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-585941 -n pause-585941
helpers_test.go:261: (dbg) Run:  kubectl --context pause-585941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (57.32s)

x
+
TestStartStop/group/no-preload/serial/Stop (139.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-956479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-956479 --alsologtostderr -v=3: exit status 82 (2m0.62238535s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-956479"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 12:02:03.254294  150987 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:02:03.254716  150987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:02:03.254730  150987 out.go:358] Setting ErrFile to fd 2...
	I0826 12:02:03.254737  150987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:02:03.255047  150987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:02:03.255312  150987 out.go:352] Setting JSON to false
	I0826 12:02:03.255388  150987 mustload.go:65] Loading cluster: no-preload-956479
	I0826 12:02:03.255724  150987 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:02:03.255792  150987 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:02:03.255948  150987 mustload.go:65] Loading cluster: no-preload-956479
	I0826 12:02:03.256048  150987 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:02:03.256073  150987 stop.go:39] StopHost: no-preload-956479
	I0826 12:02:03.256440  150987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:02:03.256486  150987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:02:03.274146  150987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0826 12:02:03.274708  150987 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:02:03.275344  150987 main.go:141] libmachine: Using API Version  1
	I0826 12:02:03.275370  150987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:02:03.275817  150987 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:02:03.278600  150987 out.go:177] * Stopping node "no-preload-956479"  ...
	I0826 12:02:03.279818  150987 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 12:02:03.279876  150987 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:02:03.280199  150987 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 12:02:03.280240  150987 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:02:03.284366  150987 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:02:03.284838  150987 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:02:03.284871  150987 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:02:03.285013  150987 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:02:03.285202  150987 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:02:03.285384  150987 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:02:03.285536  150987 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:02:03.381081  150987 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 12:02:03.439638  150987 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 12:02:03.493408  150987 main.go:141] libmachine: Stopping "no-preload-956479"...
	I0826 12:02:03.493447  150987 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:02:03.495305  150987 main.go:141] libmachine: (no-preload-956479) Calling .Stop
	I0826 12:02:03.499023  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 0/120
	I0826 12:02:04.500702  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 1/120
	I0826 12:02:05.501870  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 2/120
	I0826 12:02:06.503335  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 3/120
	I0826 12:02:07.504821  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 4/120
	I0826 12:02:08.507476  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 5/120
	I0826 12:02:09.508994  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 6/120
	I0826 12:02:10.510624  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 7/120
	I0826 12:02:11.512364  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 8/120
	I0826 12:02:12.514077  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 9/120
	I0826 12:02:13.516061  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 10/120
	I0826 12:02:14.517892  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 11/120
	I0826 12:02:15.519524  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 12/120
	I0826 12:02:16.521403  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 13/120
	I0826 12:02:17.523505  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 14/120
	I0826 12:02:18.525825  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 15/120
	I0826 12:02:19.527608  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 16/120
	I0826 12:02:20.529073  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 17/120
	I0826 12:02:21.531515  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 18/120
	I0826 12:02:22.533578  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 19/120
	I0826 12:02:23.535600  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 20/120
	I0826 12:02:24.537022  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 21/120
	I0826 12:02:25.538418  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 22/120
	I0826 12:02:26.540868  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 23/120
	I0826 12:02:27.542634  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 24/120
	I0826 12:02:28.544507  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 25/120
	I0826 12:02:29.546304  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 26/120
	I0826 12:02:30.547695  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 27/120
	I0826 12:02:31.550186  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 28/120
	I0826 12:02:32.551562  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 29/120
	I0826 12:02:33.553869  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 30/120
	I0826 12:02:34.555260  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 31/120
	I0826 12:02:35.556716  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 32/120
	I0826 12:02:36.558434  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 33/120
	I0826 12:02:37.560136  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 34/120
	I0826 12:02:38.562261  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 35/120
	I0826 12:02:39.563884  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 36/120
	I0826 12:02:40.565842  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 37/120
	I0826 12:02:41.567363  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 38/120
	I0826 12:02:42.569166  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 39/120
	I0826 12:02:43.571654  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 40/120
	I0826 12:02:44.573590  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 41/120
	I0826 12:02:45.574879  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 42/120
	I0826 12:02:46.576684  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 43/120
	I0826 12:02:47.578526  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 44/120
	I0826 12:02:48.580501  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 45/120
	I0826 12:02:49.581732  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 46/120
	I0826 12:02:50.583564  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 47/120
	I0826 12:02:51.585439  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 48/120
	I0826 12:02:52.587059  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 49/120
	I0826 12:02:53.589303  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 50/120
	I0826 12:02:54.591225  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 51/120
	I0826 12:02:55.592561  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 52/120
	I0826 12:02:56.594356  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 53/120
	I0826 12:02:57.596089  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 54/120
	I0826 12:02:58.598788  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 55/120
	I0826 12:02:59.600251  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 56/120
	I0826 12:03:00.602061  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 57/120
	I0826 12:03:01.603588  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 58/120
	I0826 12:03:02.605006  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 59/120
	I0826 12:03:03.607463  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 60/120
	I0826 12:03:04.609035  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 61/120
	I0826 12:03:05.610997  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 62/120
	I0826 12:03:06.613700  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 63/120
	I0826 12:03:07.615884  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 64/120
	I0826 12:03:08.617892  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 65/120
	I0826 12:03:09.619428  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 66/120
	I0826 12:03:10.621982  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 67/120
	I0826 12:03:11.624293  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 68/120
	I0826 12:03:12.625850  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 69/120
	I0826 12:03:13.628144  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 70/120
	I0826 12:03:14.630000  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 71/120
	I0826 12:03:15.631484  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 72/120
	I0826 12:03:16.633768  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 73/120
	I0826 12:03:17.635836  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 74/120
	I0826 12:03:18.735637  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 75/120
	I0826 12:03:19.737545  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 76/120
	I0826 12:03:20.740096  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 77/120
	I0826 12:03:21.741785  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 78/120
	I0826 12:03:22.743440  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 79/120
	I0826 12:03:23.746047  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 80/120
	I0826 12:03:24.747622  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 81/120
	I0826 12:03:25.749504  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 82/120
	I0826 12:03:26.751314  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 83/120
	I0826 12:03:27.753011  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 84/120
	I0826 12:03:28.755313  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 85/120
	I0826 12:03:29.756972  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 86/120
	I0826 12:03:30.758574  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 87/120
	I0826 12:03:31.760198  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 88/120
	I0826 12:03:32.761645  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 89/120
	I0826 12:03:33.763775  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 90/120
	I0826 12:03:34.765531  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 91/120
	I0826 12:03:35.766950  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 92/120
	I0826 12:03:36.768550  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 93/120
	I0826 12:03:37.770053  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 94/120
	I0826 12:03:38.772376  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 95/120
	I0826 12:03:39.774298  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 96/120
	I0826 12:03:40.775698  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 97/120
	I0826 12:03:41.778165  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 98/120
	I0826 12:03:42.780512  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 99/120
	I0826 12:03:43.782351  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 100/120
	I0826 12:03:44.784274  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 101/120
	I0826 12:03:45.786870  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 102/120
	I0826 12:03:46.788796  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 103/120
	I0826 12:03:47.790732  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 104/120
	I0826 12:03:48.792477  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 105/120
	I0826 12:03:49.794190  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 106/120
	I0826 12:03:50.795758  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 107/120
	I0826 12:03:51.797461  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 108/120
	I0826 12:03:52.799127  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 109/120
	I0826 12:03:53.800909  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 110/120
	I0826 12:03:54.802960  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 111/120
	I0826 12:03:55.804479  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 112/120
	I0826 12:03:56.806563  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 113/120
	I0826 12:03:57.808825  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 114/120
	I0826 12:03:58.810565  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 115/120
	I0826 12:03:59.811870  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 116/120
	I0826 12:04:00.814268  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 117/120
	I0826 12:04:01.815646  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 118/120
	I0826 12:04:02.817876  150987 main.go:141] libmachine: (no-preload-956479) Waiting for machine to stop 119/120
	I0826 12:04:03.819207  150987 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 12:04:03.819290  150987 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0826 12:04:03.821070  150987 out.go:201] 
	W0826 12:04:03.822384  150987 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0826 12:04:03.822405  150987 out.go:270] * 
	* 
	W0826 12:04:03.826458  150987 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:04:03.827797  150987 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-956479 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479: exit status 3 (18.489191445s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:22.319201  152057 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0826 12:04:22.319224  152057 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-956479" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.11s)
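Note on the failure mode above: the stderr trace shows the stop path backing up /etc/cni and /etc/kubernetes, issuing the driver Stop call, and then polling the VM state once per second for 120 attempts before exiting with GUEST_STOP_TIMEOUT (exit status 82), which accounts for the ~2m0.6s command duration. The Go sketch below only illustrates that poll-with-deadline pattern; stopVM and vmState are hypothetical stand-ins, not minikube's actual libmachine API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for the driver calls seen in the log
	// (Calling .Stop / .GetState); vmState never leaving "Running"
	// reproduces the timeout path observed above.
	func stopVM() error   { return nil }
	func vmState() string { return "Running" }

	// waitForStop requests a stop, then polls once per second up to
	// maxAttempts before reporting a timeout.
	func waitForStop(maxAttempts int) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			if vmState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// 120 attempts at one second each is roughly the two minutes
		// elapsed before the GUEST_STOP_TIMEOUT error in the log.
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}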

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-923586 --alsologtostderr -v=3
E0826 12:02:20.477077  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-923586 --alsologtostderr -v=3: exit status 82 (2m0.618100873s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-923586"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 12:02:06.257232  151071 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:02:06.257457  151071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:02:06.257465  151071 out.go:358] Setting ErrFile to fd 2...
	I0826 12:02:06.257470  151071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:02:06.257668  151071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:02:06.257885  151071 out.go:352] Setting JSON to false
	I0826 12:02:06.257964  151071 mustload.go:65] Loading cluster: embed-certs-923586
	I0826 12:02:06.259490  151071 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:02:06.259591  151071 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:02:06.259804  151071 mustload.go:65] Loading cluster: embed-certs-923586
	I0826 12:02:06.259911  151071 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:02:06.259937  151071 stop.go:39] StopHost: embed-certs-923586
	I0826 12:02:06.260288  151071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:02:06.260330  151071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:02:06.277173  151071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0826 12:02:06.277707  151071 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:02:06.278333  151071 main.go:141] libmachine: Using API Version  1
	I0826 12:02:06.278354  151071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:02:06.278771  151071 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:02:06.281244  151071 out.go:177] * Stopping node "embed-certs-923586"  ...
	I0826 12:02:06.282513  151071 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 12:02:06.282547  151071 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:02:06.282806  151071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 12:02:06.282858  151071 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:02:06.286237  151071 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:02:06.286712  151071 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:01:14 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:02:06.286740  151071 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:02:06.286951  151071 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:02:06.287135  151071 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:02:06.287323  151071 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:02:06.287427  151071 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:02:06.384308  151071 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 12:02:06.442887  151071 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 12:02:06.499757  151071 main.go:141] libmachine: Stopping "embed-certs-923586"...
	I0826 12:02:06.499794  151071 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:02:06.501507  151071 main.go:141] libmachine: (embed-certs-923586) Calling .Stop
	I0826 12:02:06.505299  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 0/120
	I0826 12:02:07.506386  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 1/120
	I0826 12:02:08.507806  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 2/120
	I0826 12:02:09.509262  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 3/120
	I0826 12:02:10.510797  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 4/120
	I0826 12:02:11.512969  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 5/120
	I0826 12:02:12.514540  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 6/120
	I0826 12:02:13.516703  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 7/120
	I0826 12:02:14.517892  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 8/120
	I0826 12:02:15.519756  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 9/120
	I0826 12:02:16.521532  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 10/120
	I0826 12:02:17.524019  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 11/120
	I0826 12:02:18.525612  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 12/120
	I0826 12:02:19.527325  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 13/120
	I0826 12:02:20.528785  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 14/120
	I0826 12:02:21.531040  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 15/120
	I0826 12:02:22.533096  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 16/120
	I0826 12:02:23.534591  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 17/120
	I0826 12:02:24.536348  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 18/120
	I0826 12:02:25.538012  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 19/120
	I0826 12:02:26.540168  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 20/120
	I0826 12:02:27.541934  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 21/120
	I0826 12:02:28.543805  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 22/120
	I0826 12:02:29.546213  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 23/120
	I0826 12:02:30.547698  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 24/120
	I0826 12:02:31.550409  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 25/120
	I0826 12:02:32.551888  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 26/120
	I0826 12:02:33.553495  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 27/120
	I0826 12:02:34.555067  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 28/120
	I0826 12:02:35.556598  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 29/120
	I0826 12:02:36.558358  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 30/120
	I0826 12:02:37.559940  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 31/120
	I0826 12:02:38.561634  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 32/120
	I0826 12:02:39.563488  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 33/120
	I0826 12:02:40.565219  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 34/120
	I0826 12:02:41.567363  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 35/120
	I0826 12:02:42.569478  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 36/120
	I0826 12:02:43.571853  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 37/120
	I0826 12:02:44.573431  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 38/120
	I0826 12:02:45.574880  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 39/120
	I0826 12:02:46.576488  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 40/120
	I0826 12:02:47.578394  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 41/120
	I0826 12:02:48.580032  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 42/120
	I0826 12:02:49.581555  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 43/120
	I0826 12:02:50.583415  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 44/120
	I0826 12:02:51.585766  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 45/120
	I0826 12:02:52.587180  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 46/120
	I0826 12:02:53.589302  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 47/120
	I0826 12:02:54.590926  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 48/120
	I0826 12:02:55.592426  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 49/120
	I0826 12:02:56.595165  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 50/120
	I0826 12:02:57.596698  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 51/120
	I0826 12:02:58.598775  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 52/120
	I0826 12:02:59.600257  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 53/120
	I0826 12:03:00.601813  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 54/120
	I0826 12:03:01.604028  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 55/120
	I0826 12:03:02.605954  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 56/120
	I0826 12:03:03.607626  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 57/120
	I0826 12:03:04.609142  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 58/120
	I0826 12:03:05.610882  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 59/120
	I0826 12:03:06.613370  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 60/120
	I0826 12:03:07.615212  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 61/120
	I0826 12:03:08.617486  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 62/120
	I0826 12:03:09.619161  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 63/120
	I0826 12:03:10.621713  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 64/120
	I0826 12:03:11.624140  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 65/120
	I0826 12:03:12.625748  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 66/120
	I0826 12:03:13.627526  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 67/120
	I0826 12:03:14.629199  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 68/120
	I0826 12:03:15.630945  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 69/120
	I0826 12:03:16.633213  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 70/120
	I0826 12:03:17.635070  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 71/120
	I0826 12:03:18.735505  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 72/120
	I0826 12:03:19.737408  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 73/120
	I0826 12:03:20.739333  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 74/120
	I0826 12:03:21.741785  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 75/120
	I0826 12:03:22.743614  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 76/120
	I0826 12:03:23.745671  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 77/120
	I0826 12:03:24.747456  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 78/120
	I0826 12:03:25.749144  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 79/120
	I0826 12:03:26.751447  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 80/120
	I0826 12:03:27.753356  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 81/120
	I0826 12:03:28.754997  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 82/120
	I0826 12:03:29.756631  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 83/120
	I0826 12:03:30.758255  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 84/120
	I0826 12:03:31.760065  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 85/120
	I0826 12:03:32.761767  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 86/120
	I0826 12:03:33.763508  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 87/120
	I0826 12:03:34.765013  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 88/120
	I0826 12:03:35.766573  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 89/120
	I0826 12:03:36.768874  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 90/120
	I0826 12:03:37.770786  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 91/120
	I0826 12:03:38.772255  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 92/120
	I0826 12:03:39.773984  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 93/120
	I0826 12:03:40.775392  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 94/120
	I0826 12:03:41.777974  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 95/120
	I0826 12:03:42.779818  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 96/120
	I0826 12:03:43.781468  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 97/120
	I0826 12:03:44.783494  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 98/120
	I0826 12:03:45.786019  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 99/120
	I0826 12:03:46.788495  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 100/120
	I0826 12:03:47.790198  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 101/120
	I0826 12:03:48.791730  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 102/120
	I0826 12:03:49.793503  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 103/120
	I0826 12:03:50.795222  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 104/120
	I0826 12:03:51.797790  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 105/120
	I0826 12:03:52.799203  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 106/120
	I0826 12:03:53.800628  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 107/120
	I0826 12:03:54.802378  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 108/120
	I0826 12:03:55.804286  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 109/120
	I0826 12:03:56.806316  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 110/120
	I0826 12:03:57.808101  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 111/120
	I0826 12:03:58.809556  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 112/120
	I0826 12:03:59.811261  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 113/120
	I0826 12:04:00.813610  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 114/120
	I0826 12:04:01.815485  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 115/120
	I0826 12:04:02.817203  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 116/120
	I0826 12:04:03.818716  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 117/120
	I0826 12:04:04.820258  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 118/120
	I0826 12:04:05.822045  151071 main.go:141] libmachine: (embed-certs-923586) Waiting for machine to stop 119/120
	I0826 12:04:06.823331  151071 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 12:04:06.823414  151071 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0826 12:04:06.825188  151071 out.go:201] 
	W0826 12:04:06.826566  151071 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0826 12:04:06.826591  151071 out.go:270] * 
	* 
	W0826 12:04:06.829309  151071 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:04:06.830697  151071 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-923586 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586: exit status 3 (18.559112106s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:25.391263  152132 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0826 12:04:25.391292  152132 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-923586" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-839656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-839656 create -f testdata/busybox.yaml: exit status 1 (52.354018ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-839656" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-839656 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 6 (250.756423ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:03:46.563466  151918 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-839656" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 6 (271.425805ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:03:46.832643  151948 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-839656" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.58s)
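Note on the failure mode above: the deploy step shells out to kubectl with an explicit --context, so once the kubeconfig no longer contains the old-k8s-version-839656 entry it fails immediately with `error: context ... does not exist` without ever reaching the cluster. A minimal Go sketch of that pattern follows, assuming a kubectl binary on PATH; runKubectl is a hypothetical helper for illustration, not the test suite's actual code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runKubectl invokes kubectl against a named context. If the context
	// is missing from the kubeconfig, kubectl exits 1 with
	// `error: context "<name>" does not exist`, as seen above.
	func runKubectl(context string, args ...string) error {
		full := append([]string{"--context", context}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl %v failed: %w\n%s", full, err, out)
		}
		return nil
	}

	func main() {
		if err := runKubectl("old-k8s-version-839656", "create", "-f", "testdata/busybox.yaml"); err != nil {
			fmt.Println(err)
		}
	}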

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-839656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-839656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.028728188s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-839656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-839656 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-839656 describe deploy/metrics-server -n kube-system: exit status 1 (46.715857ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-839656" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-839656 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 6 (241.407145ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:05:37.156211  152851 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-839656" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479: exit status 3 (3.170379243s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:25.487210  152237 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0826 12:04:25.487228  152237 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-956479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-956479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15075311s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-956479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
E0826 12:04:34.326988  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479: exit status 3 (3.063284231s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:34.703371  152433 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0826 12:04:34.703396  152433 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-956479" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
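
The check that fails here is narrow: after the earlier Stop step, status --format={{.Host}} must print "Stopped" before the dashboard addon is enabled, and an "Error" means the probe could not even SSH into the VM (the "no route to host" lines above). A hedged sketch of that post-stop poll, reusing the binary and profile names from the log (the retry helper itself is illustrative, not what start_stop_delete_test.go actually runs):

// wait_stopped.go - hedged sketch of a post-stop host-status poll.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForStopped(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Non-running hosts make this command exit non-zero, so the error is
		// ignored and only the printed state is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil // the state the test expects post-stop
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("host never reached Stopped within %s", timeout)
}

func main() {
	if err := waitForStopped("no-preload-956479", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}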

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-697869 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-697869 --alsologtostderr -v=3: exit status 82 (2m0.535824445s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-697869"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 12:04:24.392898  152317 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:04:24.393171  152317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:04:24.393180  152317 out.go:358] Setting ErrFile to fd 2...
	I0826 12:04:24.393184  152317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:04:24.393351  152317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:04:24.393578  152317 out.go:352] Setting JSON to false
	I0826 12:04:24.393661  152317 mustload.go:65] Loading cluster: default-k8s-diff-port-697869
	I0826 12:04:24.393969  152317 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:04:24.394034  152317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:04:24.394193  152317 mustload.go:65] Loading cluster: default-k8s-diff-port-697869
	I0826 12:04:24.394292  152317 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:04:24.394316  152317 stop.go:39] StopHost: default-k8s-diff-port-697869
	I0826 12:04:24.394666  152317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:04:24.394711  152317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:04:24.410011  152317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0826 12:04:24.410497  152317 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:04:24.411157  152317 main.go:141] libmachine: Using API Version  1
	I0826 12:04:24.411187  152317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:04:24.411522  152317 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:04:24.413733  152317 out.go:177] * Stopping node "default-k8s-diff-port-697869"  ...
	I0826 12:04:24.414914  152317 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0826 12:04:24.414955  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:04:24.415222  152317 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0826 12:04:24.415269  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:04:24.417985  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:04:24.418408  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:03:33 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:04:24.418444  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:04:24.418540  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:04:24.418738  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:04:24.418910  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:04:24.419076  152317 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:04:24.524722  152317 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0826 12:04:24.583435  152317 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0826 12:04:24.667042  152317 main.go:141] libmachine: Stopping "default-k8s-diff-port-697869"...
	I0826 12:04:24.667069  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:04:24.668506  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Stop
	I0826 12:04:24.671948  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 0/120
	I0826 12:04:25.673662  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 1/120
	I0826 12:04:26.675189  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 2/120
	I0826 12:04:27.676674  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 3/120
	I0826 12:04:28.678197  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 4/120
	I0826 12:04:29.680932  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 5/120
	I0826 12:04:30.682293  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 6/120
	I0826 12:04:31.683732  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 7/120
	I0826 12:04:32.685215  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 8/120
	I0826 12:04:33.686626  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 9/120
	I0826 12:04:34.688146  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 10/120
	I0826 12:04:35.689698  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 11/120
	I0826 12:04:36.691854  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 12/120
	I0826 12:04:37.693466  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 13/120
	I0826 12:04:38.695026  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 14/120
	I0826 12:04:39.697750  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 15/120
	I0826 12:04:40.699210  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 16/120
	I0826 12:04:41.700648  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 17/120
	I0826 12:04:42.702200  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 18/120
	I0826 12:04:43.703615  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 19/120
	I0826 12:04:44.705210  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 20/120
	I0826 12:04:45.706977  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 21/120
	I0826 12:04:46.708499  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 22/120
	I0826 12:04:47.710077  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 23/120
	I0826 12:04:48.711631  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 24/120
	I0826 12:04:49.713825  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 25/120
	I0826 12:04:50.715497  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 26/120
	I0826 12:04:51.716911  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 27/120
	I0826 12:04:52.718291  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 28/120
	I0826 12:04:53.719944  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 29/120
	I0826 12:04:54.722214  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 30/120
	I0826 12:04:55.723896  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 31/120
	I0826 12:04:56.725240  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 32/120
	I0826 12:04:57.726708  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 33/120
	I0826 12:04:58.728499  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 34/120
	I0826 12:04:59.730815  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 35/120
	I0826 12:05:00.732368  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 36/120
	I0826 12:05:01.733794  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 37/120
	I0826 12:05:02.735347  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 38/120
	I0826 12:05:03.736715  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 39/120
	I0826 12:05:04.739060  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 40/120
	I0826 12:05:05.740725  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 41/120
	I0826 12:05:06.742210  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 42/120
	I0826 12:05:07.743574  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 43/120
	I0826 12:05:08.745151  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 44/120
	I0826 12:05:09.747526  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 45/120
	I0826 12:05:10.748982  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 46/120
	I0826 12:05:11.750487  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 47/120
	I0826 12:05:12.751969  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 48/120
	I0826 12:05:13.753428  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 49/120
	I0826 12:05:14.754845  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 50/120
	I0826 12:05:15.756715  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 51/120
	I0826 12:05:16.758199  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 52/120
	I0826 12:05:17.759754  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 53/120
	I0826 12:05:18.761301  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 54/120
	I0826 12:05:19.763506  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 55/120
	I0826 12:05:20.765355  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 56/120
	I0826 12:05:21.766880  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 57/120
	I0826 12:05:22.768358  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 58/120
	I0826 12:05:23.769813  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 59/120
	I0826 12:05:24.771283  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 60/120
	I0826 12:05:25.772689  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 61/120
	I0826 12:05:26.774167  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 62/120
	I0826 12:05:27.775748  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 63/120
	I0826 12:05:28.777299  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 64/120
	I0826 12:05:29.779650  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 65/120
	I0826 12:05:30.781273  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 66/120
	I0826 12:05:31.783097  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 67/120
	I0826 12:05:32.785439  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 68/120
	I0826 12:05:33.786768  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 69/120
	I0826 12:05:34.789259  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 70/120
	I0826 12:05:35.790678  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 71/120
	I0826 12:05:36.792674  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 72/120
	I0826 12:05:37.794035  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 73/120
	I0826 12:05:38.795544  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 74/120
	I0826 12:05:39.797890  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 75/120
	I0826 12:05:40.799573  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 76/120
	I0826 12:05:41.801341  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 77/120
	I0826 12:05:42.802821  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 78/120
	I0826 12:05:43.804370  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 79/120
	I0826 12:05:44.806652  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 80/120
	I0826 12:05:45.808392  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 81/120
	I0826 12:05:46.810048  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 82/120
	I0826 12:05:47.811707  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 83/120
	I0826 12:05:48.813207  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 84/120
	I0826 12:05:49.815799  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 85/120
	I0826 12:05:50.817505  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 86/120
	I0826 12:05:51.819422  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 87/120
	I0826 12:05:52.821407  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 88/120
	I0826 12:05:53.822964  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 89/120
	I0826 12:05:54.825819  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 90/120
	I0826 12:05:55.827610  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 91/120
	I0826 12:05:56.829316  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 92/120
	I0826 12:05:57.831216  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 93/120
	I0826 12:05:58.832900  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 94/120
	I0826 12:05:59.835162  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 95/120
	I0826 12:06:00.836552  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 96/120
	I0826 12:06:01.837965  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 97/120
	I0826 12:06:02.839414  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 98/120
	I0826 12:06:03.840894  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 99/120
	I0826 12:06:04.842294  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 100/120
	I0826 12:06:05.843707  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 101/120
	I0826 12:06:06.845156  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 102/120
	I0826 12:06:07.846592  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 103/120
	I0826 12:06:08.848102  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 104/120
	I0826 12:06:09.850472  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 105/120
	I0826 12:06:10.851934  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 106/120
	I0826 12:06:11.854000  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 107/120
	I0826 12:06:12.855611  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 108/120
	I0826 12:06:13.857127  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 109/120
	I0826 12:06:14.859879  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 110/120
	I0826 12:06:15.861333  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 111/120
	I0826 12:06:16.862719  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 112/120
	I0826 12:06:17.864338  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 113/120
	I0826 12:06:18.865813  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 114/120
	I0826 12:06:19.868169  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 115/120
	I0826 12:06:20.869692  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 116/120
	I0826 12:06:21.871299  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 117/120
	I0826 12:06:22.872968  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 118/120
	I0826 12:06:23.874386  152317 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for machine to stop 119/120
	I0826 12:06:24.875905  152317 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0826 12:06:24.875991  152317 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0826 12:06:24.878334  152317 out.go:201] 
	W0826 12:06:24.879808  152317 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0826 12:06:24.879828  152317 out.go:270] * 
	* 
	W0826 12:06:24.882664  152317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:06:24.884046  152317 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-697869 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869: exit status 3 (18.490798333s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:06:43.375261  153147 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0826 12:06:43.375288  153147 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-697869" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)
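
The 120 "Waiting for machine to stop" lines above are a one-second poll that gives up after two minutes and surfaces GUEST_STOP_TIMEOUT. A rough equivalent of that wait expressed directly against libvirt's virsh (domain name taken from the log; this sketches the observed behaviour, not minikube's actual stop path):

// wait_shutoff.go - hedged sketch of the 120 x 1s stop wait seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForShutoff(domain string, attempts int) error {
	state := "unknown"
	for i := 0; i < attempts; i++ {
		// "virsh domstate" reports "running", "shut off", etc. for a KVM guest.
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"domstate", domain).Output()
		if err == nil {
			state = strings.TrimSpace(string(out))
		}
		if state == "shut off" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Mirrors the failure above: the guest is still up after the final attempt.
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

func main() {
	if err := waitForShutoff("default-k8s-diff-port-697869", 120); err != nil {
		fmt.Println("stop err:", err)
	}
}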

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586: exit status 3 (3.168867445s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:28.559247  152351 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0826 12:04:28.559272  152351 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-923586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-923586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152413322s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-923586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586: exit status 3 (3.0626251s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:04:37.775357  152469 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0826 12:04:37.775381  152469 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-923586" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
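
Both the status probe and the addons enable call above die on the same condition: TCP to the node's SSH port never connects ("dial tcp 192.168.39.6:22: connect: no route to host"). A small reachability sketch for that condition, using the node IP from the log (purely illustrative; the tests do not perform this probe themselves):

// ssh_reachable.go - hedged sketch: is the node's SSH port reachable at all?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.6 is the embed-certs node address reported in the log above.
	conn, err := net.DialTimeout("tcp", "192.168.39.6:22", 3*time.Second)
	if err != nil {
		// The condition behind both failures: no route to the guest's SSH port.
		fmt.Println("node unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}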

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (740.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0826 12:05:57.400752  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m16.982884579s)

                                                
                                                
-- stdout --
	* [old-k8s-version-839656] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-839656" primary control-plane node in "old-k8s-version-839656" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 12:05:41.709633  152982 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:05:41.709891  152982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:05:41.709900  152982 out.go:358] Setting ErrFile to fd 2...
	I0826 12:05:41.709904  152982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:05:41.710134  152982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:05:41.710693  152982 out.go:352] Setting JSON to false
	I0826 12:05:41.711701  152982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6483,"bootTime":1724667459,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:05:41.711767  152982 start.go:139] virtualization: kvm guest
	I0826 12:05:41.714069  152982 out.go:177] * [old-k8s-version-839656] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:05:41.715752  152982 notify.go:220] Checking for updates...
	I0826 12:05:41.715785  152982 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:05:41.717419  152982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:05:41.718943  152982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:05:41.720298  152982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:05:41.721944  152982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:05:41.723888  152982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:05:41.725620  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:05:41.726078  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:05:41.726154  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:05:41.741770  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0826 12:05:41.742286  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:05:41.743007  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:05:41.743032  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:05:41.743418  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:05:41.743631  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:05:41.746104  152982 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0826 12:05:41.747610  152982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:05:41.747958  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:05:41.748010  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:05:41.763734  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I0826 12:05:41.764136  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:05:41.764684  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:05:41.764730  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:05:41.765112  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:05:41.765336  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:05:41.803668  152982 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:05:41.805133  152982 start.go:297] selected driver: kvm2
	I0826 12:05:41.805158  152982 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:05:41.805268  152982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:05:41.806014  152982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:05:41.806104  152982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:05:41.822669  152982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:05:41.823116  152982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:05:41.823180  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:05:41.823193  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:05:41.823231  152982 start.go:340] cluster config:
	{Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:05:41.823338  152982 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:05:41.825355  152982 out.go:177] * Starting "old-k8s-version-839656" primary control-plane node in "old-k8s-version-839656" cluster
	I0826 12:05:41.826907  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:05:41.826956  152982 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:05:41.826966  152982 cache.go:56] Caching tarball of preloaded images
	I0826 12:05:41.827060  152982 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:05:41.827072  152982 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0826 12:05:41.827183  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:05:41.827369  152982 start.go:360] acquireMachinesLock for old-k8s-version-839656: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
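
	The SSH availability check above simply runs "exit 0" over the connection with the external ssh client; success means the guest's SSH daemon is answering. A hedged Go sketch of that probe, shelling out with a subset of the options logged (the key path and address are placeholders, not the real provisioning code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshAlive returns nil once "ssh ... exit 0" succeeds, i.e. the guest's
	// SSH daemon accepts connections. Key path and address are placeholders.
	func sshAlive(addr, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(sshAlive("192.168.72.136", "/path/to/id_rsa"))
	}
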
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
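
	The shell snippet above keeps the /etc/hosts update idempotent: it does nothing when an entry for the new hostname already exists, rewrites an existing 127.0.1.1 line when there is one, and only otherwise appends. The same logic as a small Go sketch (the file path in main is a test placeholder, not what minikube edits on the guest):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostname mirrors the shell above: leave the file alone if the
	// hostname is present, rewrite an existing 127.0.1.1 entry if found,
	// otherwise append a new line.
	func ensureHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		for _, l := range lines {
			if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
				return nil // already present, nothing to do
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		fmt.Println(ensureHostname("/tmp/hosts.test", "old-k8s-version-839656"))
	}
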
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
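
	The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it to the host clock, and accept the machine when the difference is small (77.070643ms here). A minimal Go sketch of that comparison; the 2s tolerance below is an assumption for illustration, not a value taken from minikube's source.

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether the guest clock is close enough to
	// the host clock, returning the absolute delta as well.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(77 * time.Millisecond) // delta of the same order as the log above
		delta, ok := clockWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}
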
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
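
	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and force the cgroupfs cgroup manager before cri-o is restarted. A rough Go sketch of the same substitutions, done on an in-memory string rather than the real config file:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the two substitutions seen in the log: replace
	// the pause_image line and the cgroup_manager line with the desired values.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}
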
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
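
	Because /preloaded.tar.lz4 was not on the guest, the cached preload tarball was copied over SSH and unpacked with "tar -I lz4" into /var, as the timings above show. A rough sketch of the extract step, shelling out the same way (paths are placeholders; running this for real needs lz4 installed and root rights):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// extractPreload unpacks an lz4-compressed image tarball into destDir,
	// mirroring the ssh_runner invocation logged above, and reports how long
	// the extraction took.
	func extractPreload(tarball, destDir string) (time.Duration, error) {
		start := time.Now()
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return 0, fmt.Errorf("extract failed: %v (%s)", err, out)
		}
		return time.Since(start), nil
	}

	func main() {
		d, err := extractPreload("/preloaded.tar.lz4", "/var")
		fmt.Println(d, err)
	}
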
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
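
	The "needs transfer" checks above ask the runtime (via podman image inspect) for each image's ID and, on a mismatch or a missing image, remove it and try to load the copy from the local image cache; here the cache files were absent, hence the warning. A hedged Go sketch of that comparison, using the same podman invocation as the log (the expected hash below is the pause:3.2 ID taken from the log, used only as an example):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the image is absent from the runtime or
	// present under a different ID than expected, meaning the cached copy
	// would have to be transferred and loaded.
	func needsTransfer(image, expectedID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the container runtime at all
		}
		return strings.TrimSpace(string(out)) != expectedID
	}

	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
	}
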
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
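	The repeated test/openssl/ln steps above follow the usual pattern for installing a CA into the system trust directory: compute the certificate's OpenSSL subject hash and create a "<hash>.0" symlink under /etc/ssl/certs so tools that scan the hashed directory can find it. An illustrative loop over the three certificates named in the log (the loop itself is a sketch, not minikube's code):

	    # Link each CA into /etc/ssl/certs under its OpenSSL subject hash.
	    for pem in minikubeCA.pem 106598.pem 1065982.pem; do
	      src=/usr/share/ca-certificates/$pem
	      hash=$(sudo openssl x509 -hash -noout -in "$src")
	      sudo ln -fs "$src" "/etc/ssl/certs/${hash}.0"
	    done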
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
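	Each of the -checkend 86400 calls above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will not expire within that window. A stand-alone example of the same check, using one of the certificate paths from the log:

	    # -checkend N exits 0 if the certificate is still valid N seconds from now;
	    # 86400 s = 24 h, so a non-zero exit flags a cert expiring within a day.
	    sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least 24h" || echo "expires within 24h"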
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
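	The restart path above re-runs individual kubeadm init phases rather than a full "kubeadm init": certificates, kubeconfig files, kubelet start, static control-plane manifests, and local etcd, each against the same rendered config. Condensed from the commands in the log (the PATH prefix points at the cached v1.20.0 binaries):

	    # Sequence reconstructed from the log; each phase runs against the
	    # kubeadm.yaml rendered earlier and tolerates existing cluster state.
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml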
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
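	The block above is a polling loop: roughly every 500 ms minikube looks for a kube-apiserver process belonging to this cluster, and at 12:11:02, about a minute after starting, it gives up without ever finding one. A hypothetical shell equivalent of that wait (interval and timeout are inferred from the timestamps, not taken from minikube's source):

	    # Illustrative wait-for-apiserver loop: 120 iterations * 0.5 s is roughly
	    # the one-minute window seen in the log before falling back to diagnostics.
	    for i in $(seq 1 120); do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver is running"
	        break
	      fi
	      sleep 0.5
	    done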
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
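	Because no control-plane containers exist yet, each failed wait is followed by the same diagnostics sweep: "kubectl describe nodes" (which is refused while the apiserver is down), the CRI-O and kubelet journals, dmesg, and a container listing. Run by hand on the node, the equivalent commands would look roughly like this (paths and unit names as they appear in the log):

	    # Rough manual equivalent of minikube's log-gathering pass.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig || true  # refused while the apiserver is down
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a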
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
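	(The poll loop above never finds a kube-apiserver container, so every describe-nodes probe is refused on localhost:8443. A minimal sketch for reproducing the same checks by hand, assuming shell access to the test VM; every command is copied from the Run: lines in this log, nothing else is implied about the cluster state:

	    # Assumed: a shell inside the minikube VM for this profile.
	    # Same crictl call the loop uses to look for the apiserver container.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Kubelet journal, to see why the static control-plane pods are not started.
	    sudo journalctl -u kubelet -n 400
	    # The describe-nodes probe that fails above with "connection refused".
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)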
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
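Every "describe nodes" attempt in this run fails the same way: kubectl cannot reach the API server on localhost:8443, so the control plane never becomes queryable and the surrounding crictl listings stay empty. A minimal manual re-check on the node, assuming the port and kubeconfig path shown in the log (the ss and curl probes are additions for diagnosis, not commands the test itself runs):

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443
    # probe the health endpoint directly (insecure flag only for local diagnosis)
    curl -k https://localhost:8443/healthz
    # re-run the exact describe-nodes command minikube uses
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig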
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
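The block above is one pass of minikube's log-gathering loop: it looks for each control-plane container by name through crictl, then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. Since no control-plane container was ever created, every listing returns zero containers. A sketch of the equivalent manual queries, using only commands already visible in the log:

    # ask CRI-O for each expected control-plane container, including exited ones
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
    # same service logs the loop collects
    sudo journalctl -u kubelet -n 400 | tail -n 20
    sudo journalctl -u crio -n 400 | tail -n 20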
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
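Between gathering passes the loop polls for a live apiserver process and only stops collecting logs once that probe succeeds. A minimal reproduction of the probe, copying the process pattern from the log (the explicit exit-status handling is an assumption added for readability):

    # pgrep exits 0 only if a kube-apiserver command line mentioning "minikube" is found
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process found"
    else
      echo "apiserver still not running"
    fi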
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
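Each cycle above boils down to asking whether any CRI container exists for each control-plane component: "crictl ps -a --quiet --name=<component>" is run and an empty result is logged as "No container was found matching ...". Below is a minimal local sketch of that check in Go, assuming crictl and passwordless sudo are available on the node; it is illustrative only, not minikube's cri.go implementation.

// Sketch of the container-existence check the log repeats: run
// `crictl ps -a --quiet --name=<name>` and treat empty output as
// "no container found". Assumes crictl is installed and sudo works
// without a prompt; not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}

In the run above every component returns an empty ID list, which is why each cycle falls through to gathering the kubelet, dmesg, CRI-O and container-status logs instead.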
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
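Every "failed describe nodes" block is the same symptom: kubectl is pointed at localhost:8443 and nothing is listening there because kube-apiserver never came up, so the TCP connection is refused. A hedged Go sketch of the same reachability check, independent of kubectl (host and port copied from the log, everything else assumed):

// If this dial fails with "connection refused", kubectl will fail the
// same way; it only means no process is listening on the apiserver
// port yet. Illustrative check, not part of the test harness.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Printf("localhost:8443 not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8443")
}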
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
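Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*"; pgrep exits with status 1 when no process matches, which is how the wait loop decides the apiserver process is still absent. A small Go sketch of that check follows, assuming pgrep and passwordless sudo are available; it is not minikube's actual implementation.

// Process check mirroring the pgrep call at the top of each cycle.
// Exit status 1 from pgrep means "no matching process", any other
// non-zero status means pgrep itself failed.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func apiserverRunning() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // pgrep ran fine, nothing matched
	}
	return false, err // pgrep could not run
}

func main() {
	running, err := apiserverRunning()
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Println("kube-apiserver running:", running)
}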
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
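Taken together, the timestamps show one probe cycle roughly every three seconds, repeated until the apiserver appears or the overall wait times out. Below is a schematic Go version of such a retry-until-deadline loop; the interval, timeout, and checkAPIServer stub are assumptions for illustration, not the values or logic minikube actually uses.

// Generic poll-until-deadline loop matching the cadence visible in the
// log timestamps. checkAPIServer stands in for the pgrep/crictl/
// describe-nodes probes above and always fails here, so the loop ends
// with a timeout, just like the failing test.
package main

import (
	"errors"
	"fmt"
	"time"
)

func checkAPIServer() error {
	return errors.New("kube-apiserver not ready")
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := checkAPIServer(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for kube-apiserver", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForAPIServer(3*time.Second, 12*time.Second); err != nil {
		fmt.Println(err)
	}
}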
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
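The "Gathering logs for ..." steps in every cycle reduce to a fixed set of shell commands run on the node. The Go sketch below simply runs that set locally and prints each section; the command strings are copied from the log, while the surrounding harness (names, ordering, output handling) is assumed for illustration.

// Runs the same diagnostic commands the log gathers each cycle and
// prints their output. Assumes a systemd host with journalctl, dmesg
// and crictl/docker available; purely illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s failed: %v ==\n", name, err)
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}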
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	* 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	* 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 

                                                
                                                
** /stderr **
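The kubeadm output above already names the triage steps for this failure; the following is a minimal sketch of that follow-up on the node, assuming the cri-o socket path and the cgroup-driver suggestion exactly as printed in the log (none of these are verified to recover this particular run):

	# check whether the kubelet ever came up, and why it is failing
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers via cri-o, as the kubeadm hint suggests
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# minikube's own suggestion from the log: retry with the systemd cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-839656 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd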
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-839656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (245.529325ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25: (1.674225939s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
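
The script above makes the guest's /etc/hosts idempotent with respect to the new hostname: it rewrites an existing 127.0.1.1 entry if one is present and appends one otherwise, doing nothing when the hostname is already listed. A minimal Go sketch of building that command for an arbitrary hostname (hostsUpdateCmd is an illustrative name, not the helper minikube actually uses):

    package main

    import "fmt"

    // hostsUpdateCmd returns a shell command that maps 127.0.1.1 to the
    // given hostname in /etc/hosts: it rewrites an existing 127.0.1.1
    // entry if one exists, otherwise appends a new one, and is a no-op
    // when the hostname is already listed.
    func hostsUpdateCmd(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("embed-certs-923586"))
    }
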
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
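
configureAuth copies the CA material to the guest and generates a server certificate whose SANs cover the loopback address, the VM's IP, its hostname, localhost and minikube, as the san=[...] line above shows. A self-contained sketch of issuing such a SAN-bearing certificate from a CA with Go's crypto/x509 (signServerCert and the throwaway CA in main are illustrative; this is not the code path logged above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate signed by the given CA.
    // SANs that parse as IPs go into IPAddresses, everything else into
    // DNSNames, mirroring the mixed san=[...] list in the log above.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Throwaway CA, purely so the sketch runs end to end.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        certPEM, err := signServerCert(ca, caKey, "jenkins.embed-certs-923586",
            []string{"127.0.0.1", "192.168.39.6", "embed-certs-923586", "localhost", "minikube"})
        if err != nil {
            panic(err)
        }
        fmt.Print(string(certPEM))
    }
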
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
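
The fix step reads the guest's clock with `date +%s.%N`, compares it to the host-side timestamp and only resyncs when the skew exceeds a tolerance; here the ~83ms delta is accepted. A small sketch of that comparison using the two timestamps from the log (the 2s tolerance is an assumed value for illustration, not minikube's actual threshold):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724674172.168471460\n")
        if err != nil {
            panic(err)
        }
        host := time.Unix(0, 1724674172085340981) // the "Remote" timestamp from the log
        delta := guest.Sub(host)
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
        }
    }
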
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
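
CRI-O is reconfigured entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, cgroupfs is set as the cgroup manager, and conmon_cgroup is re-added beneath it. A sketch of driving the same edits from Go with os/exec (run locally for illustration; the real commands go over SSH via ssh_runner, as logged):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sedEdit is one in-place rewrite of the CRI-O drop-in config.
    type sedEdit struct{ desc, expr string }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        edits := []sedEdit{
            {"pin the pause image", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`},
            {"use cgroupfs as cgroup manager", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`},
            {"drop any conmon_cgroup line", `/conmon_cgroup = .*/d`},
            {"re-add conmon_cgroup under the manager", `/cgroup_manager = .*/a conmon_cgroup = "pod"`},
        }
        for _, e := range edits {
            cmd := exec.Command("sudo", "sed", "-i", e.expr, conf)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("%s: %v\n%s", e.desc, err, out)
                return
            }
            fmt.Println("applied:", e.desc)
        }
    }
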
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
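
Before the runtime is queried, the start path waits up to 60s each for the CRI-O socket to appear and for crictl to answer. A minimal polling sketch of the socket wait (waitForSocket is an illustrative name; the 60s budget comes from the "Will wait 60s" lines above):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is ready; safe to run `crictl version`")
    }
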
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
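
Interleaved with the embed-certs output above, the old-k8s-version-839656 VM is being recreated and the driver polls libvirt for a DHCP lease, retrying with a growing delay ("will retry after 193.282008ms", "252.110347ms", ...). A generic sketch of such a retry loop (lookupIP is a stand-in for the lease query, and the backoff constants and documentation-range address are made up for illustration):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt for the domain's DHCP lease;
    // here it simply fails a few times before "finding" an address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errNoLease
        }
        return "192.0.2.10", nil // documentation address used as a placeholder
    }

    // waitForIP retries with a jittered, growing delay until an IP appears
    // or the overall deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 0; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("gave up waiting for machine: %w", err)
            }
            base := time.Duration(attempt+1) * 200 * time.Millisecond
            jitter := time.Duration(rand.Int63n(int64(200 * time.Millisecond)))
            wait := base + jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
    }

    func main() {
        ip, err := waitForIP(2 * time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("machine is up at", ip)
    }
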
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
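
Each existing control-plane certificate is checked with `openssl x509 -checkend 86400`, i.e. "does it expire within the next 24 hours?". The equivalent check in pure Go with crypto/x509 (checkend is a hypothetical helper mirroring the openssl flag):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h; would regenerate")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
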
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
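	[editor's note] The grep/rm pairs above implement one simple rule: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm can regenerate it. A local sketch of that rule, assuming the files are readable by the caller; the endpoint string and file list come from the log, the helper name is illustrative.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
	// reference the expected API endpoint, mirroring the grep-then-rm sequence above.
	func removeStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or stale: remove it so `kubeadm init phase kubeconfig`
				// can write a fresh copy (removing an absent file is harmless here).
				_ = os.Remove(f)
				fmt.Printf("removed stale or missing %s\n", f)
				continue
			}
			fmt.Printf("kept %s\n", f)
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}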
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
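	[editor's note] The five commands above run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the cached v1.31.0 binaries. A sketch of that sequence as a plain loop, assuming root and that the binary directory and config exist; only the phase names and paths are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const (
			binDir = "/var/lib/minikube/binaries/v1.31.0"
			cfg    = "/var/tmp/minikube/kubeadm.yaml"
		)
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			// Same shape as the logged commands: run via bash so the PATH override
			// applies to kubeadm and anything it shells out to.
			cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
				return
			}
			fmt.Printf("phase %q done\n", phase)
		}
	}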
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
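	[editor's note] The polling above hits /healthz roughly twice a second, treats 403 (anonymous probe) and 500 (components still starting, as the [-] lines show) as "keep waiting", and stops at the first 200 "ok". A minimal sketch of that loop, assuming the probe is willing to skip TLS verification; the URL is the one from the log, the timeout and interval are illustrative.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the deadline passes. Non-200 responses (403, 500, ...) just trigger another poll.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // typically just "ok"
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.6:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}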
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
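	[editor's note] The two steps above create /etc/cni/net.d and copy a small bridge conflist into it. The exact 496-byte file is not reproduced in the log, so the contents below are a hypothetical minimal bridge + portmap config of the usual shape; only the destination path and the "bridge CNI" choice come from the log.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// Hypothetical minimal bridge CNI config; the real file written above is not
	// shown in the log, only its size and destination.
	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			fmt.Println("mkdir:", err)
			return
		}
		dst := filepath.Join(dir, "1-k8s.conflist")
		if err := os.WriteFile(dst, []byte(conflist), 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Println("wrote", dst)
	}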
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
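	[editor's note] The block above first lists the kube-system pods, then reads node capacity (ephemeral storage and CPU) while verifying the NodePressure condition. A sketch of both reads with client-go, assuming a kubeconfig at ~/.kube/config; the clientset calls are standard client-go, everything else (printing, selection of fields) is illustrative.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// "waiting for kube-system pods to appear": just list what exists right now.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// "verifying NodePressure condition": report per-node CPU and ephemeral-storage capacity.
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}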
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
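	[editor's note] The pod_ready.go loop above waits for each system-critical pod's Ready condition, but gives up on a pod early (the "skipping!" lines) when the node hosting it is itself not Ready. A sketch of that check with client-go, assuming the same kubeconfig setup as the previous sketch; only the condition logic mirrors the log, the names and intervals are illustrative.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the pod is Ready, returns early if its node is not
	// Ready (the "skipping!" case above), or gives up when the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil {
				if podReady(pod) {
					return nil
				}
				node, nerr := cs.CoreV1().Nodes().Get(context.Background(), pod.Spec.NodeName, metav1.GetOptions{})
				if nerr == nil && !nodeReady(node) {
					return fmt.Errorf("node %q hosting pod %q is not Ready, skipping", node.Name, name)
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-923586", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}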
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
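	[editor's note] The "native" SSH client lines above dial the machine with the per-machine private key and run a single command (`hostname`, which still reports "minikube" before provisioning renames it). A sketch of that round trip with golang.org/x/crypto/ssh, assuming the key path from the log is readable; host-key checking is skipped to match the StrictHostKeyChecking=no behaviour shown earlier.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH dials the machine with a private key and runs one command,
	// roughly what the native SSH client lines above are doing.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("192.168.72.136:22", "docker",
			"/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa",
			"hostname")
		if err != nil {
			fmt.Println("ssh:", err)
			return
		}
		fmt.Print(out) // "minikube" before the hostname is set, as in the log
	}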
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
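	[editor's note] The shell block above is an idempotent /etc/hosts edit: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic in plain Go, assuming write access to the file; the hostname comes from the log, the rest is illustrative.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry reproduces the logged shell snippet: make sure /etc/hosts maps
	// 127.0.1.1 to the machine's hostname without duplicating an existing entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		replaced := false
		for i, l := range lines {
			fields := strings.Fields(l)
			// Already mapped to this hostname somewhere: nothing to do.
			for _, f := range fields {
				if f == hostname {
					return nil
				}
			}
			// Rewrite the first existing 127.0.1.1 line in place.
			if !replaced && len(fields) > 0 && fields[0] == "127.0.1.1" {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-839656"); err != nil {
			fmt.Println("update /etc/hosts:", err)
		}
	}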
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
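	[editor's note] The "generating server cert" line above issues a machine server certificate signed by the local CA, with the listed IPs and names as SANs. A compressed sketch of that with crypto/x509; the CA here is generated in memory so the example runs standalone, whereas the real flow loads ca.pem / ca-key.pem from the certs directory named above. Subjects, SANs, and lifetimes are illustrative apart from the SAN list copied from the log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newCA creates a throwaway CA so the example is self-contained.
	func newCA() (*x509.Certificate, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, nil, err
		}
		cert, err := x509.ParseCertificate(der)
		return cert, key, err
	}

	// signServerCert issues a server certificate with the given DNS names and IPs
	// as SANs, signed by the CA, and returns it PEM-encoded.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "old-k8s-version-839656", Organization: []string{"jenkins.old-k8s-version-839656"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		ca, caKey, err := newCA()
		if err != nil {
			panic(err)
		}
		// SANs copied from the provisioning line above.
		pemBytes, err := signServerCert(ca, caKey,
			[]string{"localhost", "minikube", "old-k8s-version-839656"},
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.136")})
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pemBytes))
	}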
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
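
The find/mv one-liner above moves any bridge or podman CNI configs in /etc/cni/net.d out of the way by appending a .mk_disabled suffix. A rough, hypothetical Go equivalent (not minikube's actual code) with the same effect:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv one-liner in the log above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}
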
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
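
The kubeadm config printed above is a single file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small, illustrative Go sketch (not part of minikube) that walks such a multi-document file with gopkg.in/yaml.v3 and prints each document's apiVersion and kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKubeadmDocs walks a multi-document kubeadm config file (like the one
// written to /var/tmp/minikube/kubeadm.yaml above) and reports the
// apiVersion and kind of each YAML document it contains.
func listKubeadmDocs(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}

func main() {
	// "kubeadm.yaml" here is just an example path for local experimentation.
	if err := listKubeadmDocs("kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
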
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
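
The openssl invocations above mirror "openssl x509 -checkend 86400": each existing control-plane certificate must remain valid for at least another 24 hours before it is reused. A rough Go sketch of the same check, illustrative only and not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in the PEM file at path
// is still valid for at least d (the Go analogue of `openssl x509 -checkend`).
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid if the expiry lies beyond now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
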
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
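
	The kubeadm configuration dumped above is rendered from the cluster parameters (advertise address 192.168.61.11, port 8444, pod subnet 10.244.0.0/16, Kubernetes v1.31.0) and copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step here. As a minimal illustrative sketch of that render step, assuming a hypothetical parameter struct and template (not minikube's actual template or field set), a similar document could be produced with Go's text/template:

	package main

	import (
		"os"
		"text/template"
	)

	// Params is a hypothetical parameter set; minikube's real generator
	// (kubeadm.go:181 above) carries many more fields.
	type Params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		K8sVersion       string
	}

	const manifest = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(manifest))
		p := Params{
			AdvertiseAddress: "192.168.61.11",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-697869",
			PodSubnet:        "10.244.0.0/16",
			K8sVersion:       "v1.31.0",
		}
		// Print the rendered manifest instead of writing kubeadm.yaml.new.
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
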
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
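
	The /etc/hosts rewrite above is idempotent: it filters out any existing control-plane.minikube.internal mapping and appends the current one, so repeated starts never accumulate duplicate entries. A small sketch of the same filter-and-append idea, operating on a local copy rather than /etc/hosts (the file name is a placeholder):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "hosts.copy" // placeholder; the real target is /etc/hosts
		const entry = "192.168.61.11\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil && !os.IsNotExist(err) {
			panic(err)
		}

		// Keep every line that does not already map the control-plane name
		// (blank lines are dropped for brevity), then append the entry once.
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		out := strings.Join(kept, "\n") + "\n"
		if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
			panic(err)
		}
		fmt.Print(out)
	}
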
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
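
	The three blocks above install each CA bundle into the guest's trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can find it. A rough sketch of that hash-and-link step, shelling out to openssl exactly as the log does (paths are placeholders; this is not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of certPath and creates
	// <certsDir>/<hash>.0 pointing at it, mirroring the
	// `openssl x509 -hash` + `ln -fs` pair in the log above.
	func linkCert(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return link, os.Symlink(certPath, link)
	}

	func main() {
		// Placeholder paths; the test uses /usr/share/ca-certificates and /etc/ssl/certs.
		link, err := linkCert("minikubeCA.pem", ".")
		if err != nil {
			panic(err)
		}
		fmt.Println("created", link)
	}
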
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
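
	Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid in 24 hours; a failing check is what flags a certificate for regeneration. The equivalent test can be done directly with crypto/x509, as in this small sketch (the file name is a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, i.e. the Go analogue of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // placeholder path
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
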
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
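
	The sequence above (connection refused, then 403 for the anonymous probe, then 500 while post-start hooks finish, then 200 "ok") is the normal progression while the restarted apiserver comes up; the wait simply polls /healthz until it answers 200 or the timeout expires. A minimal polling sketch under that assumption (endpoint, interval, and timeout are placeholders, and certificate verification is skipped because the probe is unauthenticated):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The probe is anonymous; skip verification of the apiserver cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.61.11:8444/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}
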
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
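
	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist here is the bridge CNI configuration recommended at cni.go:146 above; its exact contents are not shown in the log. The sketch below writes a representative bridge conflist for pod CIDR 10.244.0.0/16 (the field values are assumptions, not the file minikube actually ships):

	package main

	import (
		"fmt"
		"os"
	)

	// A typical bridge CNI conflist; the real 1-k8s.conflist may differ.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Write next to the binary instead of /etc/cni/net.d.
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote 1-k8s.conflist")
	}
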
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
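(Editor's note: the pod_ready.go lines above poll each system-critical pod until its Ready condition turns True or the 4m0s budget runs out. Below is a rough client-go sketch of that kind of readiness poll; the namespace, pod name and timeout are copied from the log, but the helper itself is illustrative and is not minikube's implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Loads ~/.kube/config; the test harness points this at the profile under test.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above.
	if err := waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-d5f9l", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```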
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
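(Editor's note: configureAuth above regenerates the machine's server certificate and pushes the PEM files to the guest over SSH. A minimal sketch of an equivalent transfer using the system scp client is shown below; the host, key path and destination are example values, and minikube actually performs this through its own ssh_runner rather than this command.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// copyCert pushes a local PEM file to the guest, similar in spirit to the
// copyRemoteCerts step recorded above. Purely illustrative.
func copyCert(keyPath, local, remote string) error {
	cmd := exec.Command("scp",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		local, remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	// Paths taken from the log; the remote target is a writable example location,
	// not the privileged /etc/docker path the real provisioner writes to.
	err := copyCert(
		"/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa",
		"/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem",
		"docker@192.168.50.213:/tmp/ca.pem")
	if err != nil {
		fmt.Println(err)
	}
}
```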
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
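(Editor's note: the fix.go lines above read the guest's clock with `date +%s.%N`, compare it to the host clock and accept the 73ms drift as within tolerance. A small Go sketch of that comparison follows; the tolerance constant and helper name are assumptions for illustration, not minikube's actual values.)

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Example value taken from the log above.
	guest, err := parseGuestClock("1724674231.878777554")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	// Assumed tolerance for illustration; the real threshold may differ.
	const tolerance = 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```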
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
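(Editor's note: the sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the expected pause image and the cgroupfs cgroup manager before the daemon is restarted. The sketch below performs the same kind of in-place key rewrite with regexp instead of sed; the file path and keys are copied from the log, but this is an illustrative stand-in, not the code minikube runs.)

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites `key = ...` lines in a cri-o drop-in config,
// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	// Values taken from the log above.
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setConfValue(conf, key, value); err != nil {
			fmt.Println("update failed:", err)
		}
	}
}
```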
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
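(Editor's note: the preload check above shells out to `crictl images --output json` and looks for the expected control-plane image; because kube-apiserver:v1.31.0 is absent, the run falls back to loading cached images. A self-contained sketch of that idea follows; the struct is reduced to the fields the check needs and is not minikube's actual type.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors just the fields of `crictl images --output json`
// that the check below uses.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already has ref available.
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, ref) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if !ok {
		fmt.Println("assuming images are not preloaded")
	}
}
```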
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
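(Editor's note: each image that is missing from the runtime is removed via crictl and then re-ingested from the local cache tarballs under /var/lib/minikube/images with `sudo podman load -i`, as the lines above show. A hypothetical sketch of that load step using os/exec follows; the path is copied from the log and error handling is simplified.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage asks the runtime to ingest a cached image tarball,
// mirroring the `sudo podman load -i ...` calls in the log above.
func loadCachedImage(tarball string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Path taken from the log; adjust for your environment.
	if err := loadCachedImage("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
		fmt.Println(err)
	}
}
```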
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
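The two W-level lines above mean the generated kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API, and the third warns that the kubelet unit is not enabled. The remediation is exactly what the warnings suggest (old.yaml/new.yaml below are the placeholder names from the warning text, not real files in this run):

# Migrate a v1beta3 config to the current kubeadm API version.
kubeadm config migrate --old-config old.yaml --new-config new.yaml
# Clear the Service-Kubelet warning.
sudo systemctl enable kubelet.service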
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
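If the --discovery-token-ca-cert-hash printed above ever needs to be re-derived (for example to join another node after this output is gone), it can be recomputed from the cluster CA using the standard kubeadm recipe. The path below assumes the certificateDir "/var/lib/minikube/certs" reported earlier in this log:

# Recompute the sha256 CA public-key hash used by 'kubeadm join'.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'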
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
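The scp above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's content is not shown in the log. For orientation, a minimal bridge conflist of roughly that shape looks like the sketch below. All field values are illustrative assumptions, not the file minikube actually wrote:

# Illustrative only: a minimal bridge CNI conflist; minikube's real 1-k8s.conflist may differ.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF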
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
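The clusterrolebinding and node labels applied a few lines above can be checked afterwards with plain kubectl; these assume the admin kubeconfig written by kubeadm is in use:

# Verify the RBAC binding and the minikube.k8s.io/* labels on the node.
kubectl get clusterrolebinding minikube-rbac -o wide
kubectl get node embed-certs-923586 --show-labels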
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
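The four manifests applied above make up the metrics-server addon. Once the apply completes, its state can be inspected with standard kubectl calls; the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests, and "kubectl top" will only succeed once the pod actually reports Ready (later in this log it stays ContainersNotReady):

# Check that the addon objects landed and whether metrics are being served.
kubectl -n kube-system get deployment metrics-server
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl top nodes   # works only once metrics-server is Ready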
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
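These kubelet-check failures (repeated below for the same v1.20.0 init) mean nothing is answering on the kubelet's healthz port 10248. When debugging this interactively on the node, the usual first checks are the ones below; the curl is the same probe kubeadm itself performs:

# Is the kubelet unit running, and what is it logging?
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager -n 100
# The probe kubeadm is retrying:
curl -sSL http://localhost:10248/healthz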
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
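The healthz probe logged above can be reproduced from the node with curl; the IP and port are taken from the log, the CA path assumes the certificateDir reported earlier, and unauthenticated access to /healthz assumes the default system:public-info-viewer RBAC is still in place:

# Reproduce the apiserver health check from the log.
curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.6:8443/healthz
# or, skipping certificate verification:
curl -k https://192.168.39.6:8443/healthz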
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
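Since minikube has just switched the kubeconfig's current context, the cluster can be inspected directly. The context name is assumed to match the profile name shown in the log:

# Confirm the active context and look at the node minikube just brought up.
kubectl config current-context          # expected: embed-certs-923586
kubectl --context embed-certs-923586 get nodes -o wide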
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
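The two [kubelet-check] lines above (from the pid-152982 run) keep recurring while kubeadm waits on the kubelet's local health endpoint. A minimal manual triage sketch for that symptom, assuming shell access to the node; these commands are illustrative and were not run as part of this log:

    # Is the kubelet serving its local health endpoint at all?
    curl -sS http://localhost:10248/healthz; echo
    # Is the systemd unit running, and if not, why?
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 50
    # Common culprits: a bad /var/lib/kubelet/config.yaml, a cgroup-driver mismatch, or swap left enabled
    swapon --show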
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
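For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation (illustrative; this run keeps its certificates under /var/lib/minikube/certs, as the certificateDir line earlier in the log shows):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'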
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
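The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not reproduced in the log. A representative bridge CNI config of the kind this step writes looks roughly like the following; the concrete field values here are assumptions shown only to illustrate the file's shape, not the exact file minikube generated:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF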
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
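Once the addons are enabled, their state can be checked from the host with ordinary kubectl commands (illustrative; not run in this log). Note that the metrics-server image selected earlier in this run is fake.domain/registry.k8s.io/echoserver:1.4, an unpullable placeholder, which is the likely reason the metrics-server pod remains Pending / ContainersNotReady in the pod listings further down:

    kubectl --context default-k8s-diff-port-697869 -n kube-system get deploy,pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-697869 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20
    # Only meaningful once metrics-server is actually Ready:
    kubectl --context default-k8s-diff-port-697869 top nodes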
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
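The healthz probe above hits the apiserver directly on the non-default port 8444. Equivalent manual checks, either raw over TLS or through kubectl, would look like this (illustrative commands, not part of the run):

    curl -k https://192.168.61.11:8444/healthz; echo
    curl -k 'https://192.168.61.11:8444/readyz?verbose' | head    # per-check breakdown
    kubectl --context default-k8s-diff-port-697869 get --raw='/livez?verbose' | head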
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
	
	
	==> CRI-O <==
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.606020106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674680605990078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dd56bd6-7f58-40cd-a76d-d9373b0785ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.606892911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a65fc9ec-e733-4921-8ad8-9c3d4435842c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.607048960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a65fc9ec-e733-4921-8ad8-9c3d4435842c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.607141682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a65fc9ec-e733-4921-8ad8-9c3d4435842c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.638323468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9512baa1-a9dc-4447-a6d6-42d783ed57f4 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.638394168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9512baa1-a9dc-4447-a6d6-42d783ed57f4 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.641015907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa42fc4a-ced6-4194-85fe-f1196777fd4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.641394571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674680641372625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa42fc4a-ced6-4194-85fe-f1196777fd4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.642057630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b750d0b2-a500-4c75-988b-ff09d436c40a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.642124400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b750d0b2-a500-4c75-988b-ff09d436c40a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.642160354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b750d0b2-a500-4c75-988b-ff09d436c40a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.678250923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e733de38-65e1-4087-8bb8-24640f8ae969 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.678334233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e733de38-65e1-4087-8bb8-24640f8ae969 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.679494724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1fb2972-b16d-47eb-9293-26f0899362cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.679905922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674680679883252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1fb2972-b16d-47eb-9293-26f0899362cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.680459173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c02445cb-3cd2-4e24-a9bf-1fc81df2000e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.680513197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c02445cb-3cd2-4e24-a9bf-1fc81df2000e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.680545642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c02445cb-3cd2-4e24-a9bf-1fc81df2000e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.713378236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d8c74ec-ac56-4ab7-b96f-6d0769b8de71 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.713506501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d8c74ec-ac56-4ab7-b96f-6d0769b8de71 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.718679802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f8e9514-f7bd-4676-89f1-e110cb4d1f5d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.719060850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674680719039403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f8e9514-f7bd-4676-89f1-e110cb4d1f5d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.719720836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f8c73d7-7001-4a2e-8354-d2765d68aa05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.719772268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f8c73d7-7001-4a2e-8354-d2765d68aa05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:18:00 old-k8s-version-839656 crio[650]: time="2024-08-26 12:18:00.719803160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0f8c73d7-7001-4a2e-8354-d2765d68aa05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug26 12:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052898] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039892] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851891] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935402] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.449604] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.385904] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.067684] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067976] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.189122] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.154809] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.263872] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.466854] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 130 callbacks suppressed
	[Aug26 12:10] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +12.058589] kauditd_printk_skb: 46 callbacks suppressed
	[Aug26 12:14] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug26 12:16] systemd-fstab-generator[5304]: Ignoring "noauto" option for root device
	[  +0.068224] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:18:00 up 8 min,  0 users,  load average: 0.02, 0.10, 0.07
	Linux old-k8s-version-839656 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc0000d8ee0, 0x4f04d00, 0xc000873750)
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00090cef0)
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009cbef0, 0x4f0ac20, 0xc000313360, 0x1, 0xc0001020c0)
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8ee0, 0xc0001020c0)
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0002774c0, 0xc000133ae0)
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 26 12:17:57 old-k8s-version-839656 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 26 12:17:58 old-k8s-version-839656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 26 12:17:58 old-k8s-version-839656 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 26 12:17:58 old-k8s-version-839656 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 26 12:17:58 old-k8s-version-839656 kubelet[5549]: I0826 12:17:58.622227    5549 server.go:416] Version: v1.20.0
	Aug 26 12:17:58 old-k8s-version-839656 kubelet[5549]: I0826 12:17:58.622552    5549 server.go:837] Client rotation is on, will bootstrap in background
	Aug 26 12:17:58 old-k8s-version-839656 kubelet[5549]: I0826 12:17:58.624690    5549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 26 12:17:58 old-k8s-version-839656 kubelet[5549]: I0826 12:17:58.626210    5549 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 26 12:17:58 old-k8s-version-839656 kubelet[5549]: W0826 12:17:58.626287    5549 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (239.475467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-839656" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (740.70s)
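The failure above is the one minikube itself points at (issue #4172 in the log): kubelet v1.20.0 never answers on localhost:10248, and its last line before the timeout is "Cannot detect current cgroup on cgroup v2". A minimal re-check sketch, assuming the profile name and the --driver/--container-runtime flags shown elsewhere in this report; the cgroup-driver override is the log's own suggestion, not a verified fix:

	# inspect kubelet on the guest, using the commands kubeadm suggests in the log above
	minikube ssh -p old-k8s-version-839656 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-839656 "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the start with the kubelet cgroup-driver override suggested at the end of the log
	minikube start -p old-k8s-version-839656 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd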

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869: exit status 3 (3.168089414s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:06:46.543251  153241 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0826 12:06:46.543283  153241 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-697869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-697869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152159797s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-697869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869: exit status 3 (3.063248877s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0826 12:06:55.759327  153320 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0826 12:06:55.759351  153320 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-697869" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
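Both status probes in this test fail at the SSH layer ("dial tcp 192.168.61.11:22: connect: no route to host"), so the harness sees "Error" where it expects "Stopped". A hedged diagnostic sketch; it assumes the kvm2 driver names the libvirt domain after the profile, which this log does not confirm:

	# check whether the guest is actually powered off at the hypervisor level
	sudo virsh list --all | grep default-k8s-diff-port-697869
	# then re-query minikube's view of the host (same command the test runs)
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869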

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923586 -n embed-certs-923586
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:23:51.887855963 +0000 UTC m=+5837.213521698
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-923586 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-923586 logs -n 25: (2.173785631s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
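	The run of identical "no route to host" messages above comes from libmachine repeatedly dialing the stopped no-preload-956479 VM's SSH port (192.168.50.213:22) until it answers or the provisioning step gives up. A minimal Go sketch of that dial-and-retry pattern, with a made-up waitForSSH helper (illustrative only, not minikube's actual code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP address until it accepts connections or the
// deadline expires, logging each failed dial and sleeping before the next
// attempt, as seen in the log lines above.
func waitForSSH(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Error dialing TCP: %v, retrying in %s\n", err, interval)
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("192.168.50.213:22", 3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```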
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
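	The interleaved "will retry after …: waiting for machine to come up" lines above are the DHCP-lease poll for the restarted embed-certs-923586 VM: each miss is followed by a sleep that grows (with jitter) before the next lookup. A rough sketch of that backoff loop, assuming a placeholder lookupLeaseIP helper (not minikube's actual retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
// matching the VM's MAC address; it is a placeholder for this sketch.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls for a lease, sleeping a little longer (plus jitter) after
// each miss, the pattern behind the "will retry after ..." log lines.
func waitForIP(mac string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the interval between polls
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:2e:e9:ed", 15); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}
```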
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
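	The configureAuth steps above regenerate the machine's TLS material: the host-side ca/cert/key PEMs are refreshed, a server certificate is issued against minikubeCA with the SANs listed in the log (127.0.0.1, 192.168.39.6, embed-certs-923586, localhost, minikube), and the results are copied to /etc/docker on the VM. A self-contained sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509 (a stand-in with a throwaway CA, not minikube's cert helper):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (sketch only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-923586"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		DNSNames:     []string{"embed-certs-923586", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes of DER\n", len(der))
}
```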
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
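(The fix step reads the guest clock over SSH with `date +%s.%N` and compares it against the local clock, accepting the host while the delta stays inside a small tolerance, 83ms here. A minimal shell sketch of the same comparison, reusing the key path and IP from this log:)

    guest=$(ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa \
        docker@192.168.39.6 'date +%s.%N')
    host_now=$(date +%s.%N)
    # positive means the guest clock is ahead of the host clock
    echo "delta: $(echo "$guest - $host_now" | bc)s"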
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
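(Taken together, the sed edits above leave the cri-o drop-in with roughly the following settings; this is an approximation of /etc/crio/crio.conf.d/02-crio.conf after editing, with the surrounding TOML table headers omitted:)

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]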
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
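(The netfilter step is tolerant: the sysctl probe fails above because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. Checking the end state by hand inside the guest:)

    lsmod | grep br_netfilter                  # loaded by the modprobe above
    sysctl net.bridge.bridge-nf-call-iptables  # resolves once the module is loaded (typically 1)
    cat /proc/sys/net/ipv4/ip_forward          # 1 after the echo above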
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
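(The preload flow above: when `crictl images` shows the expected kube images are missing, the cached tarball is copied to the guest as /preloaded.tar.lz4 and unpacked straight into /var, where cri-o keeps its image storage, so no pulls are needed. The guest-side commands, condensed from the steps above:)

    stat -c "%s %y" /preloaded.tar.lz4   # is a preload already on the node?
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json     # the preloaded images should now be listed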
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
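(The rendered config above, InitConfiguration + ClusterConfiguration + KubeletConfiguration + KubeProxyConfiguration in one file, is what lands in /var/tmp/minikube/kubeadm.yaml.new. As a sketch that is not part of this log, it can be sanity-checked with kubeadm's dry-run before being promoted:)

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run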
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
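(The pattern above, copy the PEM into /usr/share/ca-certificates and symlink it as /etc/ssl/certs/<subject-hash>.0, is OpenSSL's hashed trust directory: clients look a CA up by the hash of its subject name. The hash used for the link name comes from the same openssl call the log runs, e.g. for the minikube CA:)

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0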
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
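(The -checkend 86400 probes above confirm each control-plane certificate is still valid at least 24 hours out; openssl exits 0 when the certificate will not expire within the given window. The same checks as a compact loop over the files from this log:)

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        && echo "$c: ok" || echo "$c: expires within 24h"
    done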
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
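(For the restart path the control plane is rebuilt phase by phase with the versioned kubeadm binary rather than via a full `kubeadm init`; the sequence just run, condensed with the paths from this log:)

    K=/var/lib/minikube/binaries/v1.31.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo $K init phase certs all         --config $CFG   # (re)issue any missing certificates
    sudo $K init phase kubeconfig all    --config $CFG   # admin/kubelet/controller-manager/scheduler kubeconfigs
    sudo $K init phase kubelet-start     --config $CFG   # write kubelet config/env and start kubelet
    sudo $K init phase control-plane all --config $CFG   # static pod manifests for apiserver/cm/scheduler
    sudo $K init phase etcd local        --config $CFG   # static pod manifest for local etcd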
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
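(The checker polls https://192.168.39.6:8443/healthz until it returns a plain 200/"ok": the 403s above are what an unauthenticated system:anonymous request gets, and the 500s carry the per-check breakdown while post-start hooks are still settling. Probing the same endpoint by hand, unauthenticated, so expect the 403 unless anonymous access to /healthz is allowed:)

    curl -sk "https://192.168.39.6:8443/healthz?verbose"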
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
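
The preload step above has three parts: crictl images reports no preloaded images, the stat existence check shows /preloaded.tar.lz4 is missing, so the cached preloaded-images tarball is copied to the guest and unpacked into /var with tar using an lz4 decompressor. A minimal Go sketch of that extract step, assuming sudo, tar and lz4 exist on the target; the helper name is illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the tar invocation in the log: unpack the lz4-compressed
// image tarball into dir, preserving security xattrs.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		// corresponds to the failed existence check above
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
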
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
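
The interleaved retry.go lines for default-k8s-diff-port-697869 show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a slightly longer, jittered delay. A rough sketch of that pattern, assuming a hypothetical lookupIP helper and made-up backoff constants (minikube's actual retry package differs):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases; it fails
// until the guest has been assigned an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP retries lookupIP with a jittered, roughly doubling delay,
// mirroring the "will retry after ..." messages in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-697869", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
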
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
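
The certificate wiring above follows the standard OpenSSL hash-directory convention: each CA PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash and then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so TLS clients can find it by subject hash. A small sketch of computing that link name, shelling out to openssl exactly as the log does; the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink asks openssl for the certificate's subject hash and returns
// the /etc/ssl/certs/<hash>.0 path that the log links the PEM to.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("/etc/ssl/certs/%s.0", hash), nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("symlink target:", link) // e.g. /etc/ssl/certs/b5213941.0 in the run above
}
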
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
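
After the kubeadm control-plane and etcd phases, the restart path simply polls for a kube-apiserver process, which is what the repeated pgrep runs above are. A minimal sketch of that wait loop; the 500ms interval and overall timeout are guesses taken from the log's cadence, not minikube's actual constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process whose command
// line mentions minikube appears, matching the repeated Run lines above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// exit status 0 means at least one matching process exists
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
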
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
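provision.go generates that server certificate in Go, signing it with the minikube CA and embedding the SANs listed above. Purely as an illustrative openssl equivalent (key size, validity, and file names here are assumptions, not what minikube does internally):

    SAN="IP:127.0.0.1,IP:192.168.61.11,DNS:default-k8s-diff-port-697869,DNS:localhost,DNS:minikube"
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-697869"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -extfile <(printf 'subjectAltName=%s\n' "$SAN") -out server.pem
    openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'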
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
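fix.go reads the guest clock with `date +%s.%N` and compares it against the host; only when the delta exceeds the tolerance is the guest clock resynced. A minimal sketch of the same comparison from the host (ssh target is the machine from this run; the key path is illustrative):

    HOST=docker@192.168.61.11
    KEY=~/.minikube/machines/default-k8s-diff-port-697869/id_rsa   # illustrative key path
    guest=$(ssh -i "$KEY" "$HOST" 'date +%s.%N')
    host=$(date +%s.%N)
    # absolute delta in seconds; in the run above it was ~0.076s, well inside tolerance
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta: %.6fs\n", d }'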
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
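cni.go side-lines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so that only minikube's own CNI config is loaded. The rename is reversible; a hypothetical helper to restore them (not part of minikube) could look like:

    # undo the .mk_disabled renames done above (illustrative only)
    shopt -s nullglob
    for f in /etc/cni/net.d/*.mk_disabled; do
      sudo mv "$f" "${f%.mk_disabled}"
    done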
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
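Since cri-o is the selected runtime, the cri-dockerd and docker units are stopped, disabled, and masked first, one command at a time above. Condensed into a single illustrative loop:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is no longer active"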
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
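Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, set cgroupfs as the cgroup manager, put conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, all inside /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the drop-in carries those values (expected lines are approximate):

    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf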
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
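The sysctl probe fails only because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on directly. The same settings made explicit, plus persistence across reboots (the persistence part is an addition, not something this run does):

    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1
    # persist (not done in the log above)
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf >/dev/null
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null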
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
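The two 60-second waits in start.go amount to polling for the CRI socket and then asking crictl for the runtime version, which is where the CRI-O 1.29.1 figure above comes from. Roughly:

    SOCK=/var/run/crio/crio.sock
    timeout 60 bash -c "until stat $SOCK >/dev/null 2>&1; do sleep 1; done"
    sudo crictl --runtime-endpoint "unix://$SOCK" version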
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
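After the domain is (re)created, the kvm2 driver polls libvirt for a DHCP lease on the MAC above, backing off a few hundred milliseconds between attempts. An approximate shell equivalent from the host, assuming virsh access to the same libvirt network (fixed half-second sleep instead of the growing backoff):

    NET=mk-no-preload-956479
    MAC=52:54:00:dd:57:47
    until virsh --connect qemu:///system net-dhcp-leases "$NET" | grep -qi "$MAC"; do
      echo "waiting for machine to come up..."
      sleep 0.5
    done
    virsh --connect qemu:///system net-dhcp-leases "$NET" | grep -i "$MAC"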
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
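The ExecStart override above is what pins kubelet to the minikube-managed v1.31.0 binary and the node IP; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Illustrative checks to confirm what systemd will actually run:

    systemctl cat kubelet | grep -A4 '10-kubeadm.conf'
    systemctl show kubelet -p ExecStart --no-pager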
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
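That is the full kubeadm config minikube writes to /var/tmp/minikube/kubeadm.yaml.new (the 2169-byte scp below). Before reusing it, kubeadm itself can sanity-check the file; a sketch, assuming v1.31's `kubeadm config validate` subcommand is available on the node:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new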
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
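Each CA dropped under /usr/share/ca-certificates is exposed to OpenSSL through a subject-hash symlink in /etc/ssl/certs, which is where names like 51391683.0, 3ec20f2e.0, and b5213941.0 above come from. The link for one of this run's files, spelled out:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(sudo openssl x509 -hash -noout -in "$PEM")   # prints b5213941 for this file
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"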
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
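Each existing control-plane certificate is checked with openssl's -checkend 86400, i.e. "will this expire within 24 hours?"; only certs that pass are reused on restart. The same checks as one loop (paths from this run):

    CERTS=/var/lib/minikube/certs
    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$CERTS/$crt" \
        || echo "$crt expires within 24h and would be regenerated"
    done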
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
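The api_server.go lines above show the restart loop polling https://192.168.61.11:8444/healthz until the apiserver answers 200, treating connection refusals, 403 responses, and the 500 "healthz check failed" dumps as "not ready yet". A minimal Go sketch of that kind of poll loop (timeout, interval, and TLS handling are assumptions; this is not the api_server.go implementation):

// Hypothetical sketch: poll an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. Connection errors and non-200 responses
// are all treated as "not healthy yet", as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert from its own CA; verification is
		// skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.11:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}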
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
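The pod_ready.go lines above poll individual kube-system pods until their Ready condition turns True (or the 4m0s budget runs out). A hedged client-go sketch of that kind of wait, for context only; the kubeconfig path, pod name, and poll interval are assumptions, not minikube's code:

// Hypothetical sketch: wait until a pod's Ready condition is True,
// similar in spirit to the pod_ready.go polling in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-d5f9l", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}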
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
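The configureAuth step traced above (provision.go:117) generates a server certificate whose SANs cover the node IP, localhost, minikube, and the machine name. Below is a minimal, self-contained Go sketch of producing such a SAN-bearing certificate; it is self-signed for brevity (the real flow signs with the CA key under .minikube/certs), and the field values are copied from the log line above rather than from minikube source, so treat it as an illustration only.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SAN list taken from the provision.go:117 log entry above.
	dnsNames := []string{"localhost", "minikube", "no-preload-956479"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.213")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-956479"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	// Self-signed here; minikube instead signs with its shared CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```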
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
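The fix.go entries above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the skew because the ~73ms delta is inside tolerance. The sketch below shows that comparison in isolation; the helper name and the 2-second threshold are assumptions for illustration, not minikube's actual values.

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock that no resync over SSH is needed.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(73 * time.Millisecond) // delta observed in the log above
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
```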
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
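The crio.go:166 message above is expected on a fresh guest: `sysctl net.bridge.bridge-nf-call-iptables` fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the setup falls back to `modprobe br_netfilter` and then enables IP forwarding. A minimal sketch of that probe-then-fallback pattern, assuming local command execution rather than minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the "couldn't verify netfilter ... which might
// be okay" logic: a failed sysctl read is tolerated as long as br_netfilter
// can be loaded afterwards.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // proc entry already present, nothing to do
	}
	// /proc/sys/net/bridge/* only appears once the module is loaded.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("loading br_netfilter: %w", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
```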
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
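The three ln -fs steps above install each PEM into the system trust store using the OpenSSL subject-hash naming convention: hash the certificate with `openssl x509 -hash -noout`, then symlink it as /etc/ssl/certs/<hash>.0 (which is where the b5213941.0 and 3ec20f2e.0 names come from). A small sketch of that step, not minikube's own helper; it needs write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA hashes a PEM file and creates the <hash>.0 symlink OpenSSL clients expect.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}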
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
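Each of the openssl invocations above uses `-checkend 86400`, which exits 0 only if the certificate is still valid for at least another 24 hours; a non-zero exit is what would trigger regeneration. A standalone sketch of the same check (paths copied from the log lines above, run on the node itself):

package main

import (
	"fmt"
	"os/exec"
)

// validForADay reports whether the certificate will still be valid in 86400s.
func validForADay(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	return cmd.Run() == nil // non-zero exit (or a missing file) means "renew"
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", p, validForADay(p))
	}
}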
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
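The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so the kubeadm phases below can regenerate it (in this run the files simply did not exist yet, hence the exit-status-2 grep results). A standalone sketch of that decision, not minikube's own kubeadm.go logic:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		switch {
		case err != nil:
			fmt.Printf("%s: missing, nothing to clean\n", path)
		case strings.Contains(string(data), endpoint):
			fmt.Printf("%s: already points at %s, keeping\n", path, endpoint)
		default:
			fmt.Printf("%s: stale endpoint, removing\n", path)
			_ = os.Remove(path)
		}
	}
}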
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
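The five kubeadm invocations above are the control-plane restart path: regenerate certs and kubeconfigs, restart the kubelet, then re-create the static pod manifests for the control plane and local etcd. A sketch that runs the same phase sequence (binary and config paths copied from the log; this is an illustration of the ordering, not minikube's bootstrapper code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
}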
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
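The healthz sequence above is a plain poll: hit https://192.168.50.213:8443/healthz roughly every 500ms, tolerating the transient 403 (anonymous user) and 500 (post-start hooks still running) responses until a 200 "ok" arrives. A minimal sketch of the same probe; TLS verification is skipped only because this is an illustrative standalone client, not minikube's authenticated one:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" seen in the log
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence used above
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.213:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}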
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
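The pod_ready lines above repeatedly fetch each control-plane pod and report "Ready":"False" until the PodReady condition flips to True (or the 4m0s budget runs out); pods on a NotReady node are skipped outright. A standalone sketch of that readiness check using client-go (an assumed dependency here; the kubeconfig path and pod name are taken from this run, but this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-956479", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}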
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
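The cri.go/logs.go sweep above reduces to one shell pattern per component: run `crictl ps -a --quiet --name=<component>` and treat empty output as "No container was found matching", after which the fallback log sources (kubelet, dmesg, CRI-O journal, container status) are gathered instead. A standalone sketch of that check (run as root on the node, with crictl on PATH; not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for the given name filter.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil // treat a failed query the same as an empty result
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}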
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
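Interleaved with that log-gathering loop, three other profiles (process ids 152550, 152463 and 153366) are polling metrics-server pods in kube-system (metrics-server-6867b74b74-cw5t8, -ldgsl and -spxx8), and the Ready condition stays "False" for this entire window. A rough equivalent of the check pod_ready.go is performing, run against the corresponding profile's kubectl context; the pod name, namespace and 4m0s timeout come from the log, while the kubectl invocation itself is only an illustrative sketch:

	# read the Ready condition the test helper keeps polling
	kubectl -n kube-system get pod metrics-server-6867b74b74-ldgsl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block on it with the same upper bound the log reports (4m0s)
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-6867b74b74-ldgsl --timeout=4m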
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
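Every "describe nodes" attempt in this stretch fails with "connection refused" on localhost:8443, i.e. nothing is serving the API. One way to confirm that from inside the node is sketched below (it assumes ss and systemctl are available in the minikube VM, which is not guaranteed for every ISO build):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager   # same unit minikube tails above, shorter window
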
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
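The interleaved pod_ready lines come from the other test profiles polling their metrics-server pods, which stay Ready=False for the whole window shown. An equivalent manual check is sketched below; <profile> is a stand-in for the relevant kubectl context, and the addon's usual k8s-app=metrics-server label is assumed:

    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context <profile> -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=60s || \
      kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
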
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
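Since CRI-O reports no control-plane containers at all, a natural follow-up is to check whether the kubelet even has static-pod manifests to act on and whether any pod sandboxes were created. A sketch, run inside the node; the manifest path follows the standard kubeadm layout minikube uses, which is an assumption here:

    ls -l /etc/kubernetes/manifests/           # expect kube-apiserver.yaml, etcd.yaml, ...
    sudo crictl pods                           # sandboxes CRI-O has created, if any
    sudo journalctl -u crio -n 100 --no-pager  # same unit minikube tails above
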
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
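The failing describe-nodes command can also be re-run by hand with the exact binary and kubeconfig named in the log, which makes it easy to see which endpoint kubectl is dialing (both paths below are taken verbatim from the log lines above):

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    grep server: /var/lib/minikube/kubeconfig   # the API endpoint behind the "connection refused"
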
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
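Judging from the timestamps, this whole probe is retried roughly every three seconds. A tiny wait-loop capturing the same idea, if one wanted to block until an apiserver container finally appears, is sketched below; it is illustrative only and will simply spin forever in the failure state shown here:

    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
      sleep 3
    done
    echo "kube-apiserver container is present"
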
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
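	(Editor's note: once every container probe comes back empty, the collector falls back to host-level sources: the kubelet and CRI-O journals, kernel warnings, `kubectl describe nodes`, and a raw container listing. A hedged Go sketch of that fallback, reusing the exact shell commands shown in the log; the kubectl binary path and v1.20.0 version are specific to this run:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command the same way the log does (/bin/bash -c ...)
// and prints whatever comes back, errors included.
func run(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s(err: %v)\n\n", cmd, out, err)
}

func main() {
	run("sudo journalctl -u kubelet -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	run("sudo journalctl -u crio -n 400")
	run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```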
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
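	(Editor's note: the interleaved pod_ready.go:103 lines come from the other clusters in this run (PIDs 152550, 152463, 153366), each polling a metrics-server pod whose Ready condition stays False. A manual equivalent of that readiness check, assuming kubectl access to the same cluster; the pod name below is copied from this log and would differ elsewhere:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the Ready condition of the metrics-server pod named in the log.
	const pod = "metrics-server-6867b74b74-cw5t8"
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
			"-o", "jsonpath="+jsonpath).Output()
		status := strings.TrimSpace(string(out))
		fmt.Printf("Ready=%q err=%v\n", status, err)
		if status == "True" {
			return
		}
		time.Sleep(2 * time.Second) // the log shows polling at roughly this interval
	}
}
```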
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
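	(Editor's note: every `describe nodes` attempt in this log fails with "connection refused" on localhost:8443, which together with the empty crictl results points to the kube-apiserver never starting rather than kubectl being misconfigured. A quick manual check along the same lines, assuming you are on the node; the /healthz path is the standard apiserver health endpoint and is not taken from this log:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// 1) Is an apiserver process running at all? (same pgrep the log uses)
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	fmt.Printf("pgrep: %q (err: %v)\n", out, err)

	// 2) Does anything answer on the port kubectl is trying (localhost:8443)?
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err) // in this run we would expect "connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```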
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
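
The cycle above repeats while minikube waits for a kube-apiserver container to appear; the "connection to the server localhost:8443 was refused" messages simply mean the apiserver is not up yet. The same checks can be re-run by hand from the host, assuming the guest is reachable over minikube ssh and crictl is present there; a minimal sketch (the profile name is a placeholder, not taken from this log):

  # sketch: mirror the checks minikube performs above; <profile> is a placeholder
  minikube ssh -p <profile> -- sudo crictl ps -a --name kube-apiserver
  minikube ssh -p <profile> -- sudo journalctl --no-pager -u kubelet -n 400
  minikube ssh -p <profile> -- sudo journalctl --no-pager -u crio -n 400
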
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
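
The interleaved pod_ready lines come from three test processes running in parallel (152550, 152463 and 153366), each polling its own metrics-server pod until the Ready condition turns True. A rough manual equivalent of one such poll, assuming kubectl access to the profile's context (the context name is a placeholder; the pod name is the one from this log):

  # sketch: inspect the Ready condition that pod_ready.go is polling; <context> is a placeholder
  kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-ldgsl \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
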
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
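
The config check above follows a fixed pattern: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the upcoming kubeadm init can regenerate it. A rough shell equivalent of that loop, run on the guest (a sketch of the pattern, not minikube's actual implementation):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done
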
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
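
The two deprecation warnings above are non-fatal; kubeadm v1.31.0 still accepts the v1beta3 ClusterConfiguration and InitConfiguration specs. If the config were migrated ahead of time as the warning suggests, the invocation would look roughly like this (the output path is a placeholder; the binary path and input config are the ones used in this run):

  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.new.yaml
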
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
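(For reference: the 1-k8s.conflist copied above is minikube's bridge CNI configuration for this profile. A hedged way to inspect it on the node, assuming the profile name from this log and a working minikube install on the host, is:)

    # Illustrative only: print the bridge CNI config that was just written to the node.
    minikube -p embed-certs-923586 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist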
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
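(As a hedged follow-up, not part of the test harness: the three addons listed above could be spot-checked with kubectl. The context name is assumed to match the minikube profile shown in this log, and metrics-server may take a minute before it serves data.)

    # Illustrative checks of the enabled addons.
    kubectl --context embed-certs-923586 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-923586 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-923586 get storageclass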
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
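(The healthz probe above can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz is allowed, which is the Kubernetes default via the system:public-info-viewer binding:)

    # Illustrative manual probe of the same endpoint; -k skips TLS verification of the apiserver cert.
    curl -k https://192.168.39.6:8443/healthz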
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
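(At this point the profile is usable from the host. A hedged smoke test, illustrative only and not run by the test, assuming the kubectl context name minikube reports above:)

    # Illustrative: confirm the node and system pods with the freshly configured context.
    kubectl --context embed-certs-923586 get nodes -o wide
    kubectl --context embed-certs-923586 get pods -A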
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
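The two commands above create /etc/cni/net.d and copy in minikube's bridge conflist. The exact 496-byte file minikube generates is not shown in the log; a minimal Go sketch that writes a typical bridge + host-local conflist (the subnet and plugin options here are illustrative assumptions, not minikube's actual template) could look roughly like this:

    package main

    import (
    	"log"
    	"os"
    )

    // Minimal bridge CNI config with host-local IPAM. The real file minikube
    // writes to /etc/cni/net.d/1-k8s.conflist may differ; the subnet below
    // is an assumption for illustration only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }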
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
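The `cat /proc/$(pgrep kube-apiserver)/oom_adj` check above confirms the apiserver is shielded from the OOM killer (score -16). A small, self-contained Go sketch of the same idea, walking /proc instead of shelling out to pgrep (illustrative only, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // Find the kube-apiserver process and print its oom_adj, mirroring the
    // "cat /proc/$(pgrep kube-apiserver)/oom_adj" check in the log above.
    func main() {
    	entries, err := os.ReadDir("/proc")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
    		if err != nil {
    			continue // not a pid directory, or the process already exited
    		}
    		if strings.TrimSpace(string(comm)) == "kube-apiserver" {
    			adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
    			if err != nil {
    				log.Fatal(err)
    			}
    			fmt.Printf("kube-apiserver pid %s oom_adj %s", e.Name(), adj)
    			return
    		}
    	}
    	fmt.Println("kube-apiserver process not found")
    }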
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
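The repeated `kubectl get sa default` runs above are minikube polling until the default service account exists, which is the last step of elevateKubeSystemPrivileges in this log. A hedged Go sketch of such a polling loop (binary path, kubeconfig location, and timings are assumptions, not minikube's actual values):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    // Poll "kubectl get sa default" until it succeeds or a deadline passes.
    func waitForDefaultServiceAccount(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"-n", "default", "get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil // the service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("default service account is present")
    }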
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
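The addon manifests above are copied under /etc/kubernetes/addons and then applied in a single kubectl invocation, as the Completed line shows. A minimal Go sketch of that apply step using os/exec (the KUBECONFIG value is an assumption for illustration):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // Apply several addon manifests in one kubectl call, roughly mirroring the
    // metrics-server apply command from the log above.
    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
    	}
    }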
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
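The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A minimal Go sketch of such a probe (skipping TLS verification is an illustrative shortcut; a real client should trust the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    // Probe the apiserver healthz endpoint shown in the log above.
    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.61.11:8444/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }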
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
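The grep/rm sequence above removes any kubeconfig-style file under /etc/kubernetes that is missing or does not reference the expected control-plane endpoint, so the kubeadm init that follows regenerates them cleanly. A hedged Go sketch of that cleanup (error handling simplified; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Delete stale kubeconfig-style files that do not reference the expected
    // control-plane endpoint, mirroring the grep/rm steps in the log above.
    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			_ = os.Remove(f)
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }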
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
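A condensed, manual equivalent of the addon install steps logged above, hedged as an illustration only: minikube streams the manifest over its own SSH client ("scp memory -->") and then applies it with the bundled kubectl on the node. The IP, SSH key path, and on-node paths below are taken from the log; the local storage-provisioner.yaml file and the docker SSH user are stand-ins for illustration.

    # Copy a manifest to the node and apply it the way the log shows minikube doing.
    scp -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa \
        storage-provisioner.yaml docker@192.168.50.213:/tmp/storage-provisioner.yaml
    ssh -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa docker@192.168.50.213 \
        "sudo mv /tmp/storage-provisioner.yaml /etc/kubernetes/addons/storage-provisioner.yaml && \
         sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml"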
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
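The addons are now reported as enabled, but the pod listing further down still shows metrics-server-6867b74b74-gmfbr Pending. A hedged way to check the addon by hand from the test host, assuming the kubeconfig context name matches the profile (the "Done!" message later in the log implies it does):

    # Manual verification of the metrics-server addon enabled above.
    kubectl --context no-preload-956479 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context no-preload-956479 top nodes   # only succeeds once metrics-server is actually serving metrics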
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
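The block above waits for the node to be Ready, for the system-critical pods to be Ready, and for the apiserver healthz endpoint to return 200. A minimal sketch of the same checks, runnable from the test host against the endpoint shown in the log (the context and node name are assumed to match the profile):

    # Reproduce the readiness checks minikube just performed.
    kubectl --context no-preload-956479 wait --for=condition=Ready node/no-preload-956479 --timeout=6m
    kubectl --context no-preload-956479 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
    curl -k https://192.168.50.213:8443/healthz   # -k because the cert is signed by minikube's own CA; expects "ok"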
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
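The four grep + rm -f pairs above reduce to a single loop; this is only a condensation of the logged behavior, not minikube's code: any kubeconfig that does not reference the expected control-plane endpoint (here, because the file does not exist at all) is removed before kubeadm init is retried.

    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # grep exits non-zero when the endpoint is absent or the file is missing, so the config is cleared.
      sudo grep -q "https://control-plane.minikube.internal:8443" "$conf" || sudo rm -f "$conf"
    done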
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
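kubeadm's own guidance above names the commands to run on the node; collected here as a checklist, they are the same diagnostics minikube gathers in the lines that follow. Since the crictl listings below find no kube containers at all, the kubelet journal is where the failure would have to show up.

    # On the failing node:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID is a placeholder from kubeadm's message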
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
	
	
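	The kubeadm failure above ends with minikube's suggestion to check 'journalctl -xeu kubelet' and to pass an explicit kubelet cgroup driver. A minimal sketch of how that retry might look for the failing v1.20.0 profile follows; the profile name is a placeholder (not taken from this log), and the driver/runtime flags simply mirror this job's KVM/cri-o configuration and the suggestion printed above:

		# Inspect the kubelet on the affected VM first, as the log advises
		minikube -p <profile> ssh "sudo journalctl -xeu kubelet"

		# Then retry the start with the suggested cgroup driver override
		minikube start -p <profile> \
		  --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd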
	==> CRI-O <==
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.485482211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675033485454217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9edff036-9ebf-4fe2-bdb4-db89767949c6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.486140673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99267c93-b079-4588-b851-040c9bde96ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.486199737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99267c93-b079-4588-b851-040c9bde96ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.487107052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99267c93-b079-4588-b851-040c9bde96ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.529897476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f475f667-bd2e-4cd0-bda4-694c156642f0 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.530115410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f475f667-bd2e-4cd0-bda4-694c156642f0 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.531348999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acff40ed-b374-4e39-97a2-4b8c9667caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.531745535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675033531720485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acff40ed-b374-4e39-97a2-4b8c9667caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.532370076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eaeb3245-aa74-4435-807e-e2da62bec80a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.532443807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eaeb3245-aa74-4435-807e-e2da62bec80a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.532680420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eaeb3245-aa74-4435-807e-e2da62bec80a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.567724741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c463d49-e070-4d4a-9e99-0c6c2d8fda7a name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.567813668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c463d49-e070-4d4a-9e99-0c6c2d8fda7a name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.568853459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f912136-c243-4bd8-8c38-4585f41f5ebf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.569313878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675033569292078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f912136-c243-4bd8-8c38-4585f41f5ebf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.569914850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f518a421-1d0c-4681-a5e6-8197bce74c52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.569980100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f518a421-1d0c-4681-a5e6-8197bce74c52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.570281150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f518a421-1d0c-4681-a5e6-8197bce74c52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.605110196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eb40c76-29a8-4157-9755-3617fef15cda name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.605194990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eb40c76-29a8-4157-9755-3617fef15cda name=/runtime.v1.RuntimeService/Version
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.606397440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64f14a55-bd2e-4c72-abfb-d55379b81e2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.606832684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675033606808953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64f14a55-bd2e-4c72-abfb-d55379b81e2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.607419990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89f7b07e-e740-45fa-b463-88f0ced00534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.607500427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89f7b07e-e740-45fa-b463-88f0ced00534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:23:53 embed-certs-923586 crio[759]: time="2024-08-26 12:23:53.607715010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89f7b07e-e740-45fa-b463-88f0ced00534 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f40a433b56c54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   95ba53d3d629c       storage-provisioner
	c045d48a96954       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   11bfddc0d69c5       coredns-6f6b679f8f-dhm6d
	d197649fa398f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8648db7ef81da       coredns-6f6b679f8f-5tpbm
	18f87b3516f38       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   de50a633d662b       kube-proxy-xnv2b
	aef384d663a79       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   705db35e71bc9       etcd-embed-certs-923586
	70ed553437bb6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   f5205492b4248       kube-apiserver-embed-certs-923586
	12139aa5cc435       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   bc83f2c08a323       kube-controller-manager-embed-certs-923586
	85351f65cbd4e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   f9036d4abc012       kube-scheduler-embed-certs-923586
	75c024c43279a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   13c46e49fe89b       kube-apiserver-embed-certs-923586
	
	
	==> coredns [c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-923586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-923586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=embed-certs-923586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:14:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-923586
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:19:54 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:19:54 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:19:54 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:19:54 +0000   Mon, 26 Aug 2024 12:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    embed-certs-923586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6564d7e8f389450fb4fe90c3322850d2
	  System UUID:                6564d7e8-f389-450f-b4fe-90c3322850d2
	  Boot ID:                    ae96d933-1d15-4391-92f0-4db7ffbeb091
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5tpbm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-dhm6d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-embed-certs-923586                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-embed-certs-923586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-923586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-xnv2b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-923586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-k6mkf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-923586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node embed-certs-923586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node embed-certs-923586 event: Registered Node embed-certs-923586 in Controller
	
	
	==> dmesg <==
	[  +0.037979] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.746956] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.936129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.553525] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.647179] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.056376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061544] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.201679] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +0.130781] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +0.319729] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +4.213954] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.067298] kauditd_printk_skb: 154 callbacks suppressed
	[  +2.226302] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +4.582380] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.837328] kauditd_printk_skb: 109 callbacks suppressed
	[Aug26 12:13] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 12:14] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.068130] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.009437] systemd-fstab-generator[3168]: Ignoring "noauto" option for root device
	[  +0.102820] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.343563] systemd-fstab-generator[3283]: Ignoring "noauto" option for root device
	[  +0.118670] kauditd_printk_skb: 12 callbacks suppressed
	[Aug26 12:15] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54] <==
	{"level":"info","ts":"2024-08-26T12:14:32.866326Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-08-26T12:14:32.866696Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-08-26T12:14:32.866996Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:14:32.876277Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:14:32.876960Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:14:33.009094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-26T12:14:33.009241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-26T12:14:33.009320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 1"}
	{"level":"info","ts":"2024-08-26T12:14:33.009377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:14:33.009415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-08-26T12:14:33.009491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 2"}
	{"level":"info","ts":"2024-08-26T12:14:33.009517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-08-26T12:14:33.014242Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:embed-certs-923586 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:14:33.014565Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.014765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:14:33.017596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:14:33.021779Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:14:33.038258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:14:33.024118Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:14:33.038536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:14:33.024716Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:14:33.028115Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.044450Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.044511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.080098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.6:2379"}
	
	
	==> kernel <==
	 12:23:53 up 14 min,  0 users,  load average: 0.19, 0.21, 0.18
	Linux embed-certs-923586 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569] <==
	E0826 12:19:35.904325       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0826 12:19:35.904403       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:19:35.905527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:19:35.905601       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:20:35.906221       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:20:35.906284       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:20:35.906459       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:20:35.906557       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:20:35.907392       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:20:35.908531       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:22:35.908282       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:22:35.908389       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0826 12:22:35.909431       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:22:35.909567       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:22:35.909670       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:22:35.910879       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1] <==
	W0826 12:14:27.212572       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.265769       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.324642       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.354677       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.418406       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.432293       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.476133       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.476133       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.514766       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.550380       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.595233       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.637556       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.776434       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.809423       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.842554       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.961287       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.049768       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.066628       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.090308       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.091670       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.124425       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.158843       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.192450       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.316145       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.439167       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0] <==
	E0826 12:18:41.830196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:18:42.269434       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:19:11.836882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:19:12.278099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:19:41.843322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:19:42.285434       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:19:54.813935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-923586"
	E0826 12:20:11.849921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:12.295225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:20:31.508357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="412.395µs"
	E0826 12:20:41.856940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:42.305467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:20:44.493786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="74.976µs"
	E0826 12:21:11.864323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:12.313939       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:21:41.871634       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:42.321835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:22:11.878851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:12.329950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:22:41.886541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:42.338746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:11.894198       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:12.347860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:41.900937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:42.357087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:14:43.754958       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:14:43.770406       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0826 12:14:43.770539       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:14:43.965547       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:14:43.965595       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:14:43.970132       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:14:43.989208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:14:43.989497       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:14:43.989525       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:14:43.998910       1 config.go:197] "Starting service config controller"
	I0826 12:14:43.998944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:14:43.998984       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:14:43.998988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:14:44.002358       1 config.go:326] "Starting node config controller"
	I0826 12:14:44.003947       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:14:44.099948       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:14:44.100087       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:14:44.107697       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68] <==
	W0826 12:14:34.902602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:14:34.902792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:34.903002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:14:34.903086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.717749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 12:14:35.717956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.786826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:14:35.787368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.788692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:14:35.788875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.835838       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:14:35.836246       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:14:35.881367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:14:35.881593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.896591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:14:35.897318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.939364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 12:14:35.939656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.093198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:14:36.093306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.121125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 12:14:36.121313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.245759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:14:36.245976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 12:14:38.473990       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:22:46 embed-certs-923586 kubelet[3174]: E0826 12:22:46.478315    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:22:47 embed-certs-923586 kubelet[3174]: E0826 12:22:47.685271    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674967684757103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:22:47 embed-certs-923586 kubelet[3174]: E0826 12:22:47.685859    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674967684757103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:22:57 embed-certs-923586 kubelet[3174]: E0826 12:22:57.688495    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674977687918383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:22:57 embed-certs-923586 kubelet[3174]: E0826 12:22:57.689261    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674977687918383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:00 embed-certs-923586 kubelet[3174]: E0826 12:23:00.479411    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:23:07 embed-certs-923586 kubelet[3174]: E0826 12:23:07.691370    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674987690944396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:07 embed-certs-923586 kubelet[3174]: E0826 12:23:07.691409    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674987690944396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:14 embed-certs-923586 kubelet[3174]: E0826 12:23:14.477436    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:23:17 embed-certs-923586 kubelet[3174]: E0826 12:23:17.692746    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674997692393785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:17 embed-certs-923586 kubelet[3174]: E0826 12:23:17.692780    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724674997692393785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:26 embed-certs-923586 kubelet[3174]: E0826 12:23:26.477892    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:23:27 embed-certs-923586 kubelet[3174]: E0826 12:23:27.695343    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675007694858811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:27 embed-certs-923586 kubelet[3174]: E0826 12:23:27.695395    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675007694858811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]: E0826 12:23:37.499270    3174 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]: E0826 12:23:37.697739    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675017697293462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:37 embed-certs-923586 kubelet[3174]: E0826 12:23:37.697791    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675017697293462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:38 embed-certs-923586 kubelet[3174]: E0826 12:23:38.477644    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:23:47 embed-certs-923586 kubelet[3174]: E0826 12:23:47.699677    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675027699212502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:47 embed-certs-923586 kubelet[3174]: E0826 12:23:47.699723    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675027699212502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:49 embed-certs-923586 kubelet[3174]: E0826 12:23:49.478647    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	
	
	==> storage-provisioner [f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1] <==
	I0826 12:14:44.594429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:14:44.621641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:14:44.621702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:14:44.692318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:14:44.692519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc!
	I0826 12:14:44.694919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa6ca5f-0150-4138-b04c-b2f58ecad9f9", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc became leader
	I0826 12:14:44.795805       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-923586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-k6mkf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf: exit status 1 (64.921577ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-k6mkf" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.44s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:24:25.485631498 +0000 UTC m=+5870.811297221
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-697869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-697869 logs -n 25: (2.164674953s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
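configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost and minikube, then ships ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A rough standard-library sketch of issuing a certificate with that SAN list (self-signed here for brevity; the real flow signs with the minikube CA key):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-923586"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list matching the "san=[...]" log line above.
    		DNSNames:    []string{"embed-certs-923586", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }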
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
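The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host-side timestamp and accept the machine when the delta stays inside a tolerance; here ~83ms is fine. A toy version of that comparison (the 2s tolerance below is an assumed value for illustration only):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value

    	guest := time.Unix(1724674172, 168471460)                       // parsed from `date +%s.%N` on the VM
    	remote := time.Date(2024, 8, 26, 12, 9, 32, 85340981, time.UTC) // host-side timestamp

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }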
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
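When the net.bridge.bridge-nf-call-iptables probe fails with status 255 it usually just means the bridge module is not loaded yet, so the code falls back to `modprobe br_netfilter` and then enables IPv4 forwarding directly. A small sketch of that probe-then-fallback sequence (the run helper below is a stand-in for minikube's ssh_runner, not its real API):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns combined output plus the error,
    // so callers can decide whether to fall back.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Probe the bridge-netfilter sysctl; failure typically means the
    	// br_netfilter module is simply not loaded.
    	if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			fmt.Println("modprobe failed:", err)
    		}
    	}
    	// Kubernetes needs IPv4 forwarding regardless of the driver.
    	if _, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }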
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
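After restarting crio, the start code gives the runtime two 60-second budgets: one for /var/run/crio/crio.sock to appear and one for `crictl version` to succeed. A generic bounded poll loop in that spirit (waitFor and its 500ms interval are illustrative choices, not minikube's wait helpers):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitFor polls check() until it succeeds or the timeout elapses.
    func waitFor(timeout, interval time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v: %w", timeout, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitFor(60*time.Second, 500*time.Millisecond, func() error {
    		// Equivalent of the `stat /var/run/crio/crio.sock` step in the log.
    		if _, statErr := os.Stat("/var/run/crio/crio.sock"); statErr != nil {
    			return fmt.Errorf("socket not ready: %w", statErr)
    		}
    		return nil
    	})
    	fmt.Println("wait result:", err)
    }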
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
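Interleaved with the embed-certs logs, the old-k8s-version-839656 VM is being restarted and libmachine keeps scanning the DHCP leases for its MAC address, retrying after a growing, slightly randomized delay until an IP appears. A minimal sketch of that retry-with-jittered-backoff pattern (retryWithBackoff below is illustrative, not the retry.go implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn, sleeping a randomized, growing delay
    // between attempts, until fn succeeds or the attempts are exhausted.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jittered delay that roughly doubles each round, as in the log's
    		// "will retry after 193ms / 252ms / 436ms / ..." messages.
    		d := time.Duration(float64(base) * (1 + rand.Float64()) * float64(int(1)<<i))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	lookupIP := func() error { return errors.New("unable to find current IP address") }
    	if err := retryWithBackoff(5, 150*time.Millisecond, lookupIP); err != nil {
    		fmt.Println("gave up:", err)
    	}
    }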
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
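The kubeadm configuration shown above, the kubelet drop-in and the kubelet unit are rendered in memory and copied to the guest (kubeadm.yaml.new, 2156 bytes here); the .new suffix lets a later diff decide whether the running cluster actually needs reconfiguration. A sketch of that compare-before-replace idea (writeIfChanged is a hypothetical helper, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged reports whether the new rendering differs from what is
    // already on disk; only a changed config gets staged as <path>.new, so an
    // unchanged cluster can skip reconfiguration.
    func writeIfChanged(path string, data []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, data) {
    		return false, nil
    	}
    	if err := os.WriteFile(path+".new", data, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := writeIfChanged("/var/tmp/minikube/kubeadm.yaml", []byte("kind: ClusterConfiguration\n"))
    	fmt.Println("changed:", changed, "err:", err)
    }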
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
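Before choosing between a soft restart and a full re-init, every control-plane certificate is checked with `openssl x509 -checkend 86400`, i.e. it must stay valid for at least another 24 hours. The same check using only the Go standard library might look like this (the path is one of the certs from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // `d` from now — the Go equivalent of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println("valid for 24h:", ok, "err:", err)
    }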
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
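Because none of the kubeconfig files under /etc/kubernetes exist yet (the grep for the control-plane endpoint exits with status 2), each one is removed so the following `kubeadm init phase kubeconfig` run can regenerate it. A compact sketch of that check-or-remove step (ensureEndpoint is a hypothetical helper):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // ensureEndpoint removes a kubeconfig that does not point at the expected
    // control-plane endpoint so the next `kubeadm init phase kubeconfig` run
    // regenerates it; missing files are treated the same way, as in the log.
    func ensureEndpoint(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && bytes.Contains(data, []byte(endpoint)) {
    		return nil // already points at the right endpoint
    	}
    	if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    		return rmErr
    	}
    	fmt.Printf("%s will be regenerated\n", path)
    	return nil
    }

    func main() {
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		_ = ensureEndpoint("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
    	}
    }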
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
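	Note on the healthz sequence above: it follows the usual pattern when an apiserver comes back up — 403 for system:anonymous until the rbac/bootstrap-roles post-start hook has run, 500 while individual checks (etcd, the poststarthooks) are still failing, then 200 "ok". Below is a minimal Go sketch of such a poll loop; the URL, poll interval, and the InsecureSkipVerify setting are assumptions for brevity and not minikube's actual implementation, which trusts the cluster CA and authenticates with client certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until the apiserver answers 200 "ok" or the
	// timeout expires, printing the intermediate 403/500 bodies like the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Assumption for brevity: skip TLS verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.6:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}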
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
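	Note on the pod_ready.go wait above: each system-critical pod is polled for its Ready condition, and pods hosted on a node that is not yet Ready are skipped rather than waited on. A minimal client-go sketch of that readiness check follows; the function names and the 2-second poll interval are illustrative assumptions, not the test harness's actual code.

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls a single kube-system pod until it is Ready or the timeout expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s not Ready within %s", name, timeout)
	}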
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
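The block above shows the runtime being prepared over SSH: CRI-O's drop-in config /etc/crio/crio.conf.d/02-crio.conf is rewritten with sed to pin the pause image (registry.k8s.io/pause:3.2) and the cgroupfs cgroup manager, /etc/crictl.yaml is pointed at the CRI-O socket, and crio is restarted before crictl is queried for its version. A minimal Go sketch of those same steps run locally follows; the file paths and values are copied from the log, but the program itself is only an illustration, not minikube's implementation (minikube issues the equivalent commands through its ssh_runner).

// crio_config_sketch.go - hypothetical, illustrative only; mirrors the sed/tee/systemctl
// steps visible in the log for configuring CRI-O and crictl on the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1)
	}
}

func main() {
	// Same substitutions the log applies to CRI-O's drop-in config.
	run("sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, "/etc/crio/crio.conf.d/02-crio.conf")
	run("sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf")

	// Point crictl at the CRI-O socket (what the tee to /etc/crictl.yaml does).
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/tmp/crictl.yaml", []byte(crictl), 0o644); err != nil { // staged in /tmp; the log writes /etc/crictl.yaml directly
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	run("sudo", "cp", "/tmp/crictl.yaml", "/etc/crictl.yaml")
	run("sudo", "systemctl", "restart", "crio")
}

Run as a user with sudo rights on the node; after the restart, "sudo /usr/bin/crictl version" should report the RuntimeName/RuntimeVersion pair seen in the log above.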
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
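The preload path above copies the cached preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 on the node, unpacks it into /var with extended attributes preserved, and then removes the tarball. A rough Go equivalent of the extract-and-clean-up step is sketched below; the tar flags mirror the log, but the program is a standalone illustration rather than minikube code.

// preload_extract_sketch.go - hypothetical sketch of the preload extraction seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload tarball not present:", err)
		os.Exit(1)
	}
	// Matches the log: keep xattrs (including security.capability), decompress with lz4, unpack under /var.
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// The log removes the tarball once the image store has been populated.
	if err := run("sudo", "rm", "-f", tarball); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}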
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
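The /etc/hosts edit above uses a shell one-liner: strip any existing control-plane.minikube.internal line, append the current control-plane IP, and copy the temporary file back with sudo. A small Go sketch of the same idempotent rewrite follows; it is a hypothetical helper, with the IP and hostname hard-coded from this run rather than discovered dynamically.

// hosts_entry_sketch.go - hypothetical sketch of the idempotent /etc/hosts rewrite shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ip, host = "192.168.72.136", "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the log's grep -v $'\tcontrol-plane.minikube.internal$'.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := "/tmp/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Copy back with sudo, as the log does with "sudo cp /tmp/h.$$ /etc/hosts".
	if err := exec.Command("sudo", "cp", tmp, "/etc/hosts").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}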
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
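The certificate checks above lean on two openssl idioms: "openssl x509 -hash -noout" derives the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks, and "-checkend 86400" confirms each certificate will still be valid 24 hours from now. A short Go sketch of the expiry check alone is given below; it is illustrative only (it reads a single PEM file passed on the command line) and is not how minikube performs the check.

// cert_checkend_sketch.go - hypothetical equivalent of "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400 fails when the certificate expires within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}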
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
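The repeated pgrep lines above are the apiserver wait loop: after the kubeadm init phases, the process list is polled roughly every 500ms until a kube-apiserver matching the minikube profile shows up. A hypothetical Go version of that poll loop follows; the command string is copied from the log, while the 2-minute deadline is an assumption added purely for illustration.

// apiserver_wait_sketch.go - hypothetical sketch of the 500ms pgrep polling loop in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline for this sketch
	for time.Now().Before(deadline) {
		// pgrep exits 0 once at least one matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
	os.Exit(1)
}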
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
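The symlink steps above follow the OpenSSL hashed-directory convention: the link name under /etc/ssl/certs is the output of "openssl x509 -hash -noout" on the certificate plus a ".0" suffix (b5213941.0 for minikubeCA.pem in this run), which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of the same steps, reusing the minikubeCA.pem path from this log purely as an example:

	# compute the subject hash OpenSSL uses for CA lookup and create the <hash>.0 symlink
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")      # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
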
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
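The healthz wait above polls https://192.168.61.11:8444/healthz roughly every 500ms: the endpoint first refuses connections, then returns 403 for the anonymous user, then 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-controller, apiservice-discovery-controller, ...) are still failing, and finally 200 with body "ok". A rough way to reproduce the probe by hand is an insecure curl loop against the same address (illustrative only; minikube's own check in api_server.go uses its configured HTTP client rather than curl):

	# poll the apiserver health endpoint until it returns HTTP 200
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.11:8444/healthz)" = "200" ]; do
	  sleep 0.5
	done
	curl -sk https://192.168.61.11:8444/healthz   # expected body once healthy: ok
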
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
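After the addon phase the log switches to per-pod readiness waits (pod_ready.go), cycling through the system-critical components listed at 12:10:26.573810. A rough manual equivalent from the host, assuming kubectl is pointed at the same profile's kubeconfig, would be the following (the 4m timeout mirrors the wait budget in the log, and k8s-app=kube-dns is the label the CoreDNS wait keys on):

	# wait for the CoreDNS pod that the log is polling to report Ready
	kubectl --context default-k8s-diff-port-697869 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl --context default-k8s-diff-port-697869 -n kube-system get pods
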
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
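To double-check which subject alternative names actually ended up in the freshly generated server certificate, one could inspect it with openssl; a minimal sketch using the server.pem path logged above:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'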
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
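The clock check above runs 'date +%s.%N' on the guest and compares it against the host time recorded at the same moment, accepting the machine when the delta stays within tolerance. A rough shell equivalent of that measurement (KEY is assumed to point at the machine's id_rsa shown earlier, and bc is assumed to be available on the host):

	KEY=/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa
	host_now=$(date +%s.%N)
	guest_now=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.50.213 'date +%s.%N')
	echo "guest-host clock delta: $(echo "$guest_now - $host_now" | bc) s"

Because the two timestamps are taken a moment apart, a small skew from SSH latency is expected, which is why a tolerance is applied rather than requiring an exact match.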
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
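Taken together, the sed edits above point CRI-O at the 3.10 pause image, switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, and allow unprivileged low ports. The intended end state of /etc/crio/crio.conf.d/02-crio.conf is roughly the following sketch (the section headers follow CRI-O's documented layout and are an assumption here, since the sed commands only touch individual keys):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]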
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
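Because the bridge netfilter sysctl could not be read, the br_netfilter module is loaded and IPv4 forwarding enabled by hand. The same prerequisite setup, plus a check that the sysctl now resolves, could be expressed as:

	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo sysctl net.bridge.bridge-nf-call-iptables   # should now print a value instead of failing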
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
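The kubeadm config dump above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the profile's values before being copied to /var/tmp/minikube/kubeadm.yaml.new. The following is a minimal, assumed sketch of that kind of rendering with Go's text/template; it is not minikube's actual generator, and the template fields and values shown are illustrative, taken only from the log above.

// kubeadm_render_sketch.go - illustrative only, not minikube code.
package main

import (
	"os"
	"text/template"
)

// A trimmed-down template mirroring the config dump in the log above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values mirror the no-preload-956479 profile shown in the log.
	_ = t.Execute(os.Stdout, map[string]string{
		"AdvertiseAddress":  "192.168.50.213",
		"BindPort":          "8443",
		"NodeName":          "no-preload-956479",
		"KubernetesVersion": "v1.31.0",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceSubnet":     "10.96.0.0/12",
	})
}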
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
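The certs.go lines above hash each CA PEM with openssl and link /etc/ssl/certs/<hash>.0 at it so the host trust store resolves the minikube CAs. A minimal sketch of that step is below; it is an assumed helper written for illustration, not minikube's certs.go, and only reuses the openssl and ln behavior already visible in the log.

// ca_link_sketch.go - illustrative only, not minikube code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM and points
// /etc/ssl/certs/<hash>.0 at it, mirroring the openssl/ln commands above.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}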
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
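The api_server.go lines above poll https://192.168.50.213:8443/healthz, tolerating the 403 (anonymous user) and 500 (bootstrap post-start hooks still failing) responses until the endpoint returns 200 "ok". The sketch below shows that polling pattern under stated assumptions; it is an illustrative stand-in, not minikube's api_server.go, and the URL and timeout are taken from or modeled on the log above.

// healthz_wait_sketch.go - illustrative only, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz retries GET /healthz until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serving cert is not in the host trust store in this sketch,
		// so certificate verification is skipped for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" above
			}
			// 403 or 500 while RBAC/priority-class bootstrap hooks finish: keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.213:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}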
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
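The pod_ready.go lines above repeatedly check the "Ready" condition of each system-critical pod, skipping pods whose node is itself not Ready. A minimal sketch of that condition check with client-go follows; it assumes client-go is available and is not minikube's pod_ready.go, with the kubeconfig path and pod name taken from the log for illustration.

// pod_ready_sketch.go - illustrative only, not minikube code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(context.Background(), cs, "kube-system", "kube-scheduler-no-preload-956479")
	fmt.Println(ready, err)
}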
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
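In the old-k8s-version (v1.20) profile above, cri.go lists containers per component with crictl, finds none ("found id: \"\"", "0 containers"), and then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. The sketch below mirrors that crictl listing step; it is an assumed helper for illustration, not minikube's cri.go, and only reuses the crictl invocation shown in the log.

// crictl_list_sketch.go - illustrative only, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
// and returns the container IDs printed one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}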
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
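	(The block above is one iteration of a retry loop: the test process with pid 152982 probes for a running kube-apiserver with pgrep, lists CRI containers for every control-plane component and finds none, then gathers kubelet, dmesg, CRI-O and container-status logs; the bundled kubectl's "describe nodes" fails with "connection to the server localhost:8443 was refused" because the apiserver never came up. The same cycle repeats below roughly every three seconds. A minimal sketch of running the same checks by hand inside the node, for example over minikube ssh — the profile name is not shown in this excerpt, so substitute your own — using only commands that appear verbatim in this log:
	
	  # Is an apiserver process running at all? (no output = not running)
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	  # Were any control-plane containers ever created? (empty output = none)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd
	
	  # Kubelet and CRI-O journals usually explain why the static pods were never started
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	
	  # Node view through the bundled kubectl; fails while the apiserver is down
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	)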
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
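	(The interleaved pod_ready lines come from three other test processes — pids 152550, 152463 and 153366 — each polling a metrics-server pod in the kube-system namespace whose Ready condition stays False for the whole excerpt. A small sketch of inspecting one of these pods by hand, using the pod name taken from the log above; the kubeconfig context for each profile is not shown here and is assumed to be selected separately:
	
	  # Current phase, readiness and restart count of the pod polled by pid 152550
	  kubectl -n kube-system get pod metrics-server-6867b74b74-cw5t8 -o wide
	
	  # Events typically show why the readiness probe keeps failing
	  kubectl -n kube-system describe pod metrics-server-6867b74b74-cw5t8
	
	  # Rough equivalent of the test's polling loop
	  kubectl -n kube-system wait --for=condition=Ready \
	    pod/metrics-server-6867b74b74-cw5t8 --timeout=5m
	)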
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
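Each probe cycle for process 152982 starts with a pgrep for a kube-apiserver process and then asks the CRI for every expected control-plane container by name (cri.go:54 via crictl ps -a --quiet --name=<component>); every lookup returns an empty ID list, so logs.go:278 records that no matching container exists. A minimal sketch of the same per-component check, run inside the node (for example over minikube ssh, which is an assumption about how one reaches the host), would be:

    # sketch: repeat the per-component CRI lookups the log performs
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done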
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
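With no control-plane containers found, the run falls back to collecting the kubelet and CRI-O journals, dmesg, container status, and a kubectl describe nodes using the bundled v1.20.0 kubectl; that last call exits with status 1 because nothing answers on localhost:8443, which is consistent with the empty crictl listings above. Equivalent manual probes inside the node would look like the following (the curl health check is an added illustration and does not appear in this log):

    # same journals the log gathers, plus a hedged apiserver reachability probe
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"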
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
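The probe-and-gather cycle repeats roughly every three seconds with identical output from 12:12:04 through at least 12:12:40, so the node never progresses past the empty container listings. Rather than re-reading each cycle, a simple polling loop inside the node (interval and exit condition are arbitrary choices, not part of the test harness) could watch for the kube-apiserver container to appear:

    # hedged convenience loop: wait until a kube-apiserver container is reported by the CRI
    until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
      sleep 3
    done
    echo "kube-apiserver container present"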
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
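The block above is one full iteration of the loop this log keeps repeating: probe for a running kube-apiserver with pgrep, enumerate CRI containers for each control-plane component, and, finding none, re-gather the kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A condensed sketch of that probe (commands taken from the log itself; the component list mirrors the names queried above):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0   # apiserver process found, nothing more to check
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"                  # empty output => no container for this component
    done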
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
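At this point process 152550 has hit its 4m0s Ready timeout on the metrics-server pod and falls back to a full kubeadm reset of the control plane. The Ready condition the poller keeps reporting as "False" can be read directly with kubectl (a sketch; the context name is a placeholder, only the pod name and namespace come from the log):

    kubectl --context <cluster> -n kube-system get pod metrics-server-6867b74b74-cw5t8 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready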
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
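The ls/grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed (harmlessly, via rm -f, even when the file never existed) before kubeadm init regenerates it. Per file, the check reduces to roughly:

    f=/etc/kubernetes/admin.conf   # same pattern for kubelet.conf, controller-manager.conf, scheduler.conf
    sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"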
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
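kubeadm has now written the static Pod manifests and is waiting (up to 4m0s) for the kubelet to bring the control plane up. While that wait runs, the node-side state can be inspected with the same tools used elsewhere in this log (a sketch, not part of the test run):

    sudo ls /etc/kubernetes/manifests               # static Pod manifests kubeadm just created
    sudo crictl ps -a --name kube-apiserver         # has the kubelet started the apiserver container yet?
    sudo journalctl -u kubelet -n 50 --no-pager     # kubelet errors if the static Pods fail to start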
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
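	(Note: the block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails; here the files simply do not exist yet. A minimal sketch of the same idea, using the endpoint this profile expects:)
	# Sketch only: replicate the stale-config cleanup shown above.
	# ENDPOINT is taken from the log for this profile.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: drop it
	  fi
	done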
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
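	(Note: the two API-spec warnings above come from kubeadm itself: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API. Kubeadm's own suggested fix, sketched here with the config path and binary location from this log; the output file name is an illustrative choice:)
	# Sketch only: migrate a deprecated kubeadm config to the newer API version.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm.new.yaml   # output name is illustrative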
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
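	(Note: the join commands above embed a bootstrap token and a CA certificate hash. If either needs to be re-derived later, a sketch of the standard approach, run on the control-plane node; the CA path follows the certificateDir "/var/lib/minikube/certs" reported above:)
	# Sketch only: list bootstrap tokens and recompute the discovery-token-ca-cert-hash.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token list
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'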
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
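	(Note: above, the 4m0s wait for metrics-server-6867b74b74-spxx8 times out, so minikube gives up on restarting the control plane and falls back to kubeadm reset. A hedged diagnostic sketch for a metrics-server pod that never becomes Ready; the k8s-app=metrics-server label is an assumption about the addon's manifests, and in these tests the addon image is deliberately pointed at fake.domain/registry.k8s.io/echoserver:1.4, as seen later in this log:)
	# Sketch only: inspect why a metrics-server pod stays NotReady.
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20   # events show the failing image pull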
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
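	(Note: the bridge CNI setup above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log; a typical bridge-plus-portmap chain looks roughly like the sketch below, where the subnet and plugin names are illustrative assumptions, not the file minikube actually wrote:)
	# Sketch only: a typical bridge CNI conflist; not the exact file minikube wrote.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	     "hairpinMode": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF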
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
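	(Note: for process 152982 the kubelet never comes up: kubeadm's probe of 127.0.0.1:10248 is refused. A troubleshooting sketch, run on that node over ssh:)
	# Sketch only: first checks when kubeadm reports the kubelet as unhealthy.
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 50       # recent kubelet logs
	curl -sSL http://localhost:10248/healthz; echo    # same probe kubeadm uses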
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
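	(Note: with storage-provisioner, default-storageclass and metrics-server enabled for embed-certs-923586, the result can be checked from the host; a small sketch, with the profile name taken from the log:)
	# Sketch only: confirm the addons minikube just enabled.
	minikube -p embed-certs-923586 addons list | grep -E 'storage-provisioner|metrics-server'
	kubectl -n kube-system get deploy metrics-server
	kubectl get storageclass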
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
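	(Note: the per-pod polling above covers coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler. Roughly the same check can be done in one shot with kubectl wait; a sketch using two of the labels listed in the log:)
	# Sketch only: wait for the same system-critical pods with kubectl directly.
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l component=kube-apiserver --timeout=6m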
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
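	(Note: the healthz probe above goes straight at https://192.168.39.6:8443/healthz. Equivalent checks through a configured kubectl, as a sketch:)
	# Sketch only: the same health checks via kubectl.
	kubectl get --raw /healthz            # expect: ok
	kubectl get --raw '/readyz?verbose'   # per-check breakdown
	kubectl version                       # reports the control plane version (v1.31.0 here)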
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
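	(Note: the NodePressure verification above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs). The same figures can be pulled directly; a sketch:)
	# Sketch only: read the capacity and conditions minikube just verified.
	kubectl get node embed-certs-923586 -o jsonpath='{.status.capacity}'; echo
	kubectl describe node embed-certs-923586 | grep -A5 '^Conditions:'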
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
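	(Note: once minikube reports Done!, kubectl on the host is already pointed at the embed-certs-923586 cluster; a quick sanity-check sketch:)
	# Sketch only: confirm the context minikube configured and that the node is Ready.
	kubectl config current-context        # expect: embed-certs-923586
	kubectl get nodes -o wide
	kubectl -n kube-system get pods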
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
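kubeadm's [kubelet-check] and [api-check] phases poll plain health endpoints until they answer, and the same probes can be reproduced by hand. A hedged sketch against the endpoints this run uses (the kubelet serves healthz on 127.0.0.1:10248 over HTTP; this profile's API server listens on 192.168.61.11:8444 over TLS, so -k is used here purely as a quick unauthenticated check):

    # Wait for the kubelet health endpoint, as [kubelet-check] does.
    until curl -sSf http://127.0.0.1:10248/healthz >/dev/null; do sleep 1; done
    # Wait for the API server health endpoint, as [api-check] does (skipping cert verification).
    until curl -ksSf https://192.168.61.11:8444/healthz >/dev/null; do sleep 1; done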
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
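The join command above embeds a --discovery-token-ca-cert-hash. If that hash ever needs to be recomputed on this node, the standard openssl pipeline from the kubeadm documentation works against the cluster CA; the CA path below uses the certificateDir reported earlier in this log, and the command is shown only as an illustration (the test does not run it):

    # Recompute the sha256 hash of the cluster CA public key used in --discovery-token-ca-cert-hash.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'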
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
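The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: minikube retries until the default service account exists before it treats the RBAC setup as complete. Expressed directly as a shell loop (binary and kubeconfig paths exactly as in the log; the loop form is illustrative):

    # Retry until the default service account has been created by the controller manager.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done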
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
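The toEnable map shows which addons this profile will run: storage-provisioner, default-storageclass and metrics-server are true, everything else is off. Outside the test harness the same state can be inspected or changed with the minikube addons subcommands (illustrative; assumes the same minikube binary and the profile name from this log):

    minikube -p default-k8s-diff-port-697869 addons list
    minikube -p default-k8s-diff-port-697869 addons enable metrics-server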
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
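The pod_ready waits that follow are minikube's own readiness polling for the system-critical components. The closest hand-run equivalent is kubectl wait on the same label selectors listed above (illustrative; the 6m timeout mirrors the wait in the log):

    # Wait for CoreDNS and kube-proxy pods to report the Ready condition.
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m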
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
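This apply installs the metrics-server APIService, RBAC, Deployment and Service manifests. Since this run later shows the metrics-server pod stuck in Pending/ContainersNotReady, these are reasonable follow-up checks when debugging by hand (illustrative; the k8s-app label is an assumption about the addon's manifests, not something read from the log):

    # Inspect the metrics-server rollout and its aggregated API registration.
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes    # only succeeds once the metrics API is actually serving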
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
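Process 152982 is hitting the failure mode reported here: its kubelet-check never gets an answer on 127.0.0.1:10248. When reproducing this by hand on the node, the usual first steps are to ask systemd why the kubelet is down and to repeat the exact probe kubeadm quotes (illustrative; the test itself runs none of these):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sSL http://localhost:10248/healthz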
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
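With the profile reporting Done!, kubectl has a context named after the profile and the cluster can be exercised directly; for example (illustrative usage, not part of the test):

    kubectl --context default-k8s-diff-port-697869 get nodes -o wide
    kubectl --context default-k8s-diff-port-697869 -n kube-system get pods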
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
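	(The lines above complete the start of the "no-preload-956479" cluster by process 152463. The lines that follow are from a separate process, 152982, retrying a v1.20.0 kubeadm init for the old-k8s-version profile; its kubelet health check on localhost:10248 never succeeds. A minimal sketch of the on-node checks for that failure, assuming SSH access to the minikube VM; every command below is taken verbatim from the kubeadm output in this log, nothing here is minikube-specific tooling:

		# probe the kubelet health endpoint that kubeadm polls
		curl -sSL http://localhost:10248/healthz
		# check whether the kubelet service is running, and read its recent logs
		systemctl status kubelet
		journalctl -xeu kubelet
		# list control-plane containers CRI-O may have started (or crashed), then inspect one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	)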
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
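The suggestion logged above amounts to retrying the start with the kubelet's cgroup driver pinned to systemd. A minimal sketch of such an invocation, assuming the KVM/CRI-O configuration this job runs with (the profile name is a placeholder, not one taken from this log):

	minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses connections on its 10248 healthz port after that, the log's own advice applies: 'journalctl -xeu kubelet' on the node is where a cgroup-driver mismatch or other kubelet misconfiguration would show up.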
	
	
	==> CRI-O <==
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.117375942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675067117348322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62dcb163-a15f-4659-898c-833f095024ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.117972126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b575a83-cd83-46fa-88e1-3a80763e4aaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.118073248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b575a83-cd83-46fa-88e1-3a80763e4aaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.118288220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b575a83-cd83-46fa-88e1-3a80763e4aaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.155639356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e57c329e-5b6f-49ed-b131-eb196b0f9a20 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.155731402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e57c329e-5b6f-49ed-b131-eb196b0f9a20 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.157057641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9954d174-af13-4343-b19e-0fd3cde7568f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.157461489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675067157436581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9954d174-af13-4343-b19e-0fd3cde7568f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.157985107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d5bc93a-0835-4e42-9b04-e7bc302a73e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.158075223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d5bc93a-0835-4e42-9b04-e7bc302a73e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.158266688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d5bc93a-0835-4e42-9b04-e7bc302a73e1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.195121806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51f14334-23b8-47f1-9ff5-6a8da8158ce2 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.195198047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51f14334-23b8-47f1-9ff5-6a8da8158ce2 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.196303169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8277a51-966a-49c9-bb72-ebf07de9abfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.196679132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675067196658606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8277a51-966a-49c9-bb72-ebf07de9abfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.197186325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d9fd788-78a4-46ca-be69-a7e85d8382e0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.197235074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d9fd788-78a4-46ca-be69-a7e85d8382e0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.197414178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d9fd788-78a4-46ca-be69-a7e85d8382e0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.229738702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=745b355b-e75d-46e4-8c6c-521848edd17e name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.229820530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=745b355b-e75d-46e4-8c6c-521848edd17e name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.231547196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0250743-4cf0-4295-b3ce-12d8ecba5ecf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.232173362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675067232149948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0250743-4cf0-4295-b3ce-12d8ecba5ecf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.232786677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce9529e8-2f8c-4079-9d03-52605640152e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.232842488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce9529e8-2f8c-4079-9d03-52605640152e name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:27 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:24:27.233100761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce9529e8-2f8c-4079-9d03-52605640152e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	270d1832bad4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   9efb1b4d46bb7       storage-provisioner
	cdb2469bb6273       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a65a74e8752e2       coredns-6f6b679f8f-mg7dz
	150f52d25ef12       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   61b09c1e488a3       coredns-6f6b679f8f-9tm7v
	db02b9eeafe0b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   c11c96971b2c6       kube-proxy-fkklg
	e74ae7c401295       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   12f714b572f38       kube-controller-manager-default-k8s-diff-port-697869
	e5e6f98951857       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f0c55c67a2682       etcd-default-k8s-diff-port-697869
	14a06fb6265b2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   31c5e141f3742       kube-apiserver-default-k8s-diff-port-697869
	db6eabb03fe18       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   cda189c36b7ea       kube-scheduler-default-k8s-diff-port-697869
	8a8ee2b12fd33       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   274fd81f46af5       kube-apiserver-default-k8s-diff-port-697869
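A listing equivalent to the container status table above can usually be reproduced on the node itself with crictl. A minimal sketch, assuming crictl is available inside the minikube VM (reachable via `out/minikube-linux-amd64 -p default-k8s-diff-port-697869 ssh`) and that CRI-O uses the socket path shown in the node annotations below:

	# Lists all containers (running and exited) via the CRI-O socket; run inside the VM.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a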
	
	
	==> coredns [150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-697869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-697869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=default-k8s-diff-port-697869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:15:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-697869
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:24:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:20:27 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:20:27 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:20:27 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:20:27 +0000   Mon, 26 Aug 2024 12:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.11
	  Hostname:    default-k8s-diff-port-697869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8ef518bda4d40419144f742d287dfbe
	  System UUID:                a8ef518b-da4d-4041-9144-f742d287dfbe
	  Boot ID:                    530fedb0-7883-43c7-9333-889ed0d8b04a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9tm7v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-mg7dz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-default-k8s-diff-port-697869                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-697869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-697869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-fkklg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-697869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-7d2qs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-697869 event: Registered Node default-k8s-diff-port-697869 in Controller
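The node description above is standard `kubectl describe node` output; if needed, it can typically be regenerated against this profile, assuming the kubectl context name matches the minikube profile as in the other commands in this report:

	# Hypothetical invocation; the context name is assumed to match the profile name.
	kubectl --context default-k8s-diff-port-697869 describe node default-k8s-diff-port-697869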
	
	
	==> dmesg <==
	[  +0.041881] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug26 12:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.995076] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561352] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.047844] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.060144] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059269] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.188746] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.147666] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.301793] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.358766] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.064825] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.871859] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.563424] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.324050] kauditd_printk_skb: 59 callbacks suppressed
	[Aug26 12:14] kauditd_printk_skb: 31 callbacks suppressed
	[Aug26 12:15] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.412279] systemd-fstab-generator[2554]: Ignoring "noauto" option for root device
	[  +4.479661] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.583979] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +5.438837] systemd-fstab-generator[3006]: Ignoring "noauto" option for root device
	[  +0.145032] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.327567] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4] <==
	{"level":"info","ts":"2024-08-26T12:15:05.961688Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:15:05.964120Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.11:2380"}
	{"level":"info","ts":"2024-08-26T12:15:05.964280Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.11:2380"}
	{"level":"info","ts":"2024-08-26T12:15:05.965507Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"2895711bae57da21","initial-advertise-peer-urls":["https://192.168.61.11:2380"],"listen-peer-urls":["https://192.168.61.11:2380"],"advertise-client-urls":["https://192.168.61.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:15:05.965823Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:15:06.292874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:06.292948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:06.292965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgPreVoteResp from 2895711bae57da21 at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:06.292987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:06.292992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgVoteResp from 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:06.293000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became leader at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:06.293007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2895711bae57da21 elected leader 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:06.295199Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:06.296156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2895711bae57da21","local-member-attributes":"{Name:default-k8s-diff-port-697869 ClientURLs:[https://192.168.61.11:2379]}","request-path":"/0/members/2895711bae57da21/attributes","cluster-id":"fb6e72b45dde42f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:15:06.296183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:06.296994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:06.298091Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:06.298952Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:15:06.299266Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:06.299286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:06.300209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:06.301182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6e72b45dde42f9","local-member-id":"2895711bae57da21","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:06.305666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:06.305716Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:06.308283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.11:2379"}
	
	
	==> kernel <==
	 12:24:27 up 14 min,  0 users,  load average: 0.26, 0.25, 0.15
	Linux default-k8s-diff-port-697869 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0826 12:20:09.190072       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:20:09.190122       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0826 12:20:09.191082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:20:09.191193       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:21:09.191807       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:21:09.192121       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:21:09.191808       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:21:09.192244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:21:09.194088       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:21:09.194126       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:23:09.195285       1 handler_proxy.go:99] no RequestInfo found in the context
	W0826 12:23:09.195304       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:23:09.195889       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0826 12:23:09.195958       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:23:09.197082       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:23:09.197126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
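The repeated 503 errors for v1beta1.metrics.k8s.io above indicate the aggregated metrics API is not serving, which is consistent with the metrics-server pod stuck in ImagePullBackOff in the kubelet section below. One way to confirm the APIService state, again assuming the profile-named kubectl context:

	# Hypothetical check; the APIService name comes from the apiserver log lines above.
	kubectl --context default-k8s-diff-port-697869 get apiservice v1beta1.metrics.k8s.io -o yaml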
	
	
	==> kube-apiserver [8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7] <==
	W0826 12:14:59.975893       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.975986       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.979555       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.988357       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998127       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998214       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998476       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.037934       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.039403       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.039417       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.073418       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.084629       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.094405       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.101158       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.113258       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.195492       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.223868       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.262453       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.272216       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.382654       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.389272       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.389577       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.491965       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.594453       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.664350       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5] <==
	E0826 12:19:15.161627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:19:15.604400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:19:45.168760       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:19:45.613067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:20:15.177882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:15.621778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:20:27.830448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-697869"
	E0826 12:20:45.189339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:45.631201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:21:15.196407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:15.639155       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:21:26.820345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.361151ms"
	I0826 12:21:40.818766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.094µs"
	E0826 12:21:45.203674       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:45.648559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:22:15.215175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:15.658337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:22:45.222453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:45.667258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:15.228934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:15.675962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:45.235143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:45.684182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:24:15.241080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:24:15.691343       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:15:17.061137       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:15:17.075555       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.11"]
	E0826 12:15:17.075631       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:15:17.302425       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:15:17.302496       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:15:17.302542       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:15:17.307274       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:15:17.307604       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:15:17.307626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:15:17.309334       1 config.go:197] "Starting service config controller"
	I0826 12:15:17.309360       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:15:17.309389       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:15:17.309393       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:15:17.330223       1 config.go:326] "Starting node config controller"
	I0826 12:15:17.330294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:15:17.411848       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:15:17.412314       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:15:17.430964       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22] <==
	W0826 12:15:09.049948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.050006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.057078       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:15:09.057117       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:15:09.086218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:15:09.086276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.094610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 12:15:09.094656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.121636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.121689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.222400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:15:09.222456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.285634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:15:09.286382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.310195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0826 12:15:09.310257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.444300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.444365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.548597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.548647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.609179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:15:09.609239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.610708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:15:09.610774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 12:15:11.512940       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:23:14 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:14.799999    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:23:20 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:20.951921    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675000948364573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:20 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:20.952508    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675000948364573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:27 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:27.800374    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:23:30 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:30.955620    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675010953909826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:30 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:30.955693    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675010953909826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:40 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:40.958382    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675020958083299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:40 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:40.958421    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675020958083299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:42 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:42.800194    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:23:50 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:50.959870    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675030959600351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:50 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:50.960293    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675030959600351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:53 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:23:53.800574    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:24:00 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:00.961789    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675040961414976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:00 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:00.962250    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675040961414976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:08 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:08.800487    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:10.810298    2881 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:10.968232    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675050967884425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:10 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:10.968259    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675050967884425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:20 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:20.971082    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675060970227671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:20 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:20.972506    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675060970227671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:23 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:24:23.799900    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	
	
	==> storage-provisioner [270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185] <==
	I0826 12:15:17.975676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:15:18.031766       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:15:18.031849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:15:18.054506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:15:18.054675       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed!
	I0826 12:15:18.054774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e92b5516-2d40-428d-bcd3-b1afcc4daa01", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed became leader
	I0826 12:15:18.156329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7d2qs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs: exit status 1 (68.009586ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7d2qs" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)
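Note on the kubelet log above: metrics-server sits in ImagePullBackOff because this profile enabled the addon with its registry overridden to fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table below), so that non-running pod is expected and is separate from the dashboard pod this test was waiting for. A rough manual re-run of the helper's check against the same profile, assuming the same context name and the k8s-app=kubernetes-dashboard selector used by the test (illustrative commands, not part of the recorded output):
	kubectl --context default-k8s-diff-port-697869 get po -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-697869 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard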

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.46s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0826 12:17:20.477176  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956479 -n no-preload-956479
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:24:57.489251567 +0000 UTC m=+5902.814917290
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
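The wait that times out above polls the kubernetes-dashboard namespace for a pod carrying the k8s-app=kubernetes-dashboard label for up to 9m0s. A minimal manual equivalent against this profile, assuming the same context name and label selector (sketch only, not part of the recorded output):
	kubectl --context no-preload-956479 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-956479 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s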
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-956479 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-956479 logs -n 25: (2.221526818s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
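
The hostname step above is intentionally idempotent: the provisioner only touches /etc/hosts when the new machine name is missing, rewriting an existing 127.0.1.1 line or appending one otherwise. A minimal Go sketch of the same edit, operating on a local file rather than over SSH (the helper name and the path are illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet above: do nothing if the
	// hostname is already mapped, rewrite an existing 127.0.1.1 line,
	// otherwise append a new mapping.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		nameRe := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
		for _, l := range lines {
			if nameRe.MatchString(l) {
				return nil // already mapped
			}
		}
		loopbackRe := regexp.MustCompile(`^127\.0\.1\.1\s`)
		for i, l := range lines {
			if loopbackRe.MatchString(l) {
				lines[i] = "127.0.1.1 " + hostname
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts-example", "embed-certs-923586"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
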
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
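
provision.go:117 above regenerates the machine's server certificate with the node IP, machine name, localhost and 127.0.0.1 as SANs, signed by the local minikube CA. A rough, self-signed stand-in using crypto/x509 (it skips the separate CA signing step, so it only illustrates how those SANs end up in the certificate):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Template carrying the same SANs as the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-923586"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-923586", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		}
		// Self-signed here; minikube signs with its own CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
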
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
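
fix.go above compares the guest clock against the host-side timestamp: 1724674172.168471460 minus 1724674172.085340981 is the 83.130479ms delta reported in the log, small enough that no clock resync is needed. A tiny reproduction of that arithmetic (the 2s tolerance is an assumed value for illustration, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps taken from the log lines above (nanoseconds since the epoch).
		guest := time.Unix(0, 1724674172168471460)  // guest clock
		remote := time.Unix(0, 1724674172085340981) // host-side reference

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		// Hypothetical tolerance: only resync the guest clock when drift exceeds it.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta = %v, within tolerance = %v\n", delta, delta <= tolerance)
	}
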
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
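
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and an unprivileged-port sysctl. A small Go sketch of just the pause-image rewrite, equivalent to the first sed above (local, hypothetical path; not how minikube itself applies the edit):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/tmp/02-crio.conf" // hypothetical copy of /etc/crio/crio.conf.d/02-crio.conf

		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
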
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
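
crio.go:166 above tolerates the failing sysctl: a missing /proc/sys/net/bridge/bridge-nf-call-iptables simply means br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A rough local equivalent (needs root; the paths and commands match the log, the Go wrapper itself is only illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl is missing, the module is not loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
				return
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
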
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
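
The interleaved 152982 lines show the restarted old-k8s-version-839656 VM being polled for an IP: each attempt that finds no DHCP lease schedules another try after a somewhat longer, jittered delay. A bare-bones retry loop in the same spirit (the backoff policy and probe function here are illustrative, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or attempts run out,
	// sleeping a growing, jittered interval between tries - the pattern behind
	// the "will retry after ..." lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("retry %d: waiting %v for machine to come up\n", i+1, jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base interval
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no DHCP lease yet") // simulate the VM still booting
			}
			return "192.168.61.10", nil // hypothetical address once the lease appears
		}, 10)
		fmt.Println(ip, err)
	}
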
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
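
The sequence above installs the minikube CA material for TLS clients: each PEM is copied into /usr/share/ca-certificates and then symlinked under /etc/ssl/certs as <subject-hash>.0 (e.g. b5213941.0), which is how OpenSSL-based clients locate trusted CAs. A minimal Go sketch of that step, shelling out to openssl for the hash (paths and names are illustrative, not minikube's certs.go):

// cahash.go - illustrative sketch: compute the OpenSSL subject-name hash of a CA
// certificate and expose it under /etc/ssl/certs/<hash>.0 ("ln -fs" semantics).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject-name hash, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
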
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
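
Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what signals that a cert needs regenerating. A rough Go equivalent of that check, assuming plain PEM files (a sketch, not minikube's code):

// certexpiry.go - sketch of what "openssl x509 -noout -in <crt> -checkend 86400" verifies:
// report failure (non-zero exit) if the certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true if "now + d" is past the certificate's NotAfter timestamp
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		os.Exit(1) // same convention as -checkend: non-zero when expiring
	}
}
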
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
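
The polling above shows the usual apiserver start-up sequence: /healthz returns 403 while anonymous requests are still forbidden (RBAC bootstrap roles not yet created), then 500 with a per-check breakdown while poststarthooks finish, and finally 200. A simplified sketch of such a polling loop; the function name and the insecure TLS config are assumptions made for brevity, not minikube's api_server.go:

// healthzpoll.go - illustrative sketch: poll the apiserver /healthz endpoint until it
// returns 200 OK, tolerating the 403/500 responses seen while bootstrap completes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is handled elsewhere in minikube; skipped here for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.6:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
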
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
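
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by "Configuring bridge CNI (Container Networking Interface)". The log does not show its contents; a representative bridge conflist (all values assumed for illustration) and the equivalent write step might look like:

// cni_bridge.go - a representative bridge CNI conflist; the subnet, plugin options and
// version below are assumptions, not the exact file minikube copied over.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
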
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
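
The pod_ready waits above only count a pod as Ready when its PodReady condition is True, and they bail out early (the "node ... not Ready (skipping!)" messages) while the hosting node itself reports Ready=False. A client-go sketch of the pod-side check; the helper name and kubeconfig path are assumptions, not minikube's pod_ready.go:

// podready.go - sketch: report whether a pod's PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), cs, "kube-system", "kube-controller-manager-embed-certs-923586")
	fmt.Println(ready, err)
}
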
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
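	The restart path logged above rebuilds the missing control-plane state by running individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A minimal sketch of that phase sequence, assuming a plain local shell rather than minikube's internal ssh_runner (the binary path, version, and config path mirror the log but are otherwise illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runPhase runs one `kubeadm init phase` step, mirroring the sequence in
	// the log above. The PATH prefix and kubeadm.yaml location are taken from
	// the log; everything else is a simplified, local-only illustration.
	func runPhase(phase string) error {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH "+
				"kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", phase))
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			if err := runPhase(p); err != nil {
				fmt.Println(err)
				return
			}
		}
	}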
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
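	The sed edits above converge on a small CRI-O drop-in before crio is restarted. A hedged reconstruction of what /etc/crio/crio.conf.d/02-crio.conf should contain afterwards, written out from Go; the section headers are assumptions inferred from CRI-O's config layout, not values read back from the VM:

	package main

	import (
		"fmt"
		"os"
	)

	// expected02CrioConf reconstructs the drop-in settings implied by the sed
	// commands in the log: pause image, cgroupfs driver, conmon cgroup, and the
	// unprivileged-port sysctl. This is an illustration, not minikube's writer.
	const expected02CrioConf = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		// Write the reconstructed drop-in next to the current directory for inspection.
		if err := os.WriteFile("02-crio.conf.example", []byte(expected02CrioConf), 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}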
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
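Editor's note: the libmachine lines above are a poll-with-backoff loop: the KVM driver asks for the domain's current IP and, while none is assigned, sleeps for a randomized, roughly increasing interval before retrying. A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the driver's DHCP-lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for asking libvirt/DHCP for the domain's address;
// in this sketch it always fails, as it would until the guest has booted.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP polls with a randomized, growing backoff, mirroring the
// "will retry after ..." lines in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	backoff := time.Second
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("waiting for machine to come up, will retry after %s\n", wait)
		time.Sleep(wait)
		backoff += backoff / 2 // grow roughly 1.5x per attempt
	}
	return "", fmt.Errorf("%s never reported an IP within %s", domain, deadline)
}

func main() {
	if _, err := waitForIP("no-preload-956479", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}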
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
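Editor's note: the kubeadm.yaml.new just copied over is the multi-document YAML shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch of sanity-checking such a file before handing it to kubeadm, assuming gopkg.in/yaml.v3 is available; this is only an illustration, not how minikube itself verifies the file.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// checkKubeadmConfig decodes every YAML document in the file and reports
// the apiVersion/kind of each one, so a malformed document fails loudly
// before "kubeadm init" ever sees it.
func checkKubeadmConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			return nil
		} else if err != nil {
			return fmt.Errorf("decode %s: %w", path, err)
		}
		fmt.Printf("found %v/%v\n", doc["apiVersion"], doc["kind"])
	}
}

func main() {
	if err := checkKubeadmConfig("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}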
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
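Editor's note: the grep/rewrite one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts before the kubelet unit is reloaded and started. A sketch of the same idempotent rewrite in Go, for illustration only; the actual step runs the shell pipeline shown in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites the hosts file so that exactly one line maps host to ip,
// dropping any stale mapping for the same host first.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.61.11", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}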
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
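Editor's note: the openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA layout: each PEM in /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A sketch of the same pattern, shelling out to openssl as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs pem under /etc/ssl/certs as <subject-hash>.0, the
// layout OpenSSL's default verify path expects.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/106598.pem",
	} {
		if err := linkCACert(pem); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}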
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
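Editor's note: the -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. The equivalent check in pure Go with crypto/x509, as a sketch; the test run itself uses openssl as shown.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d (the openssl -checkend equivalent).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}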
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
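Editor's note: because existing configuration files were found, the restart path above does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml. A sketch of driving the same phase sequence, for illustration; minikube runs these over SSH with its own PATH handling.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same phase order as the logged commands.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running: kubeadm", args)
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}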
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
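Editor's note: the healthz loop that starts here polls https://192.168.61.11:8444/healthz until it returns 200, treating connection refused, 403 (anonymous access before RBAC bootstrap) and 500 (post-start hooks still failing) as retryable. A sketch of the same wait, assuming it is acceptable to skip TLS verification for an anonymous probe:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; any error or non-200 status is retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe, so skip certificate verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.61.11:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}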
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
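The two lines above create /etc/cni/net.d and write the 496-byte 1-k8s.conflist for the bridge CNI chosen for the kvm2 + crio combination. The sketch below writes a generic bridge conflist to the same path; the JSON fields and subnet are assumptions for illustration and do not reproduce the exact file minikube generates.

package main

import "os"

// A generic CNI bridge conflist; the real 1-k8s.conflist written by minikube
// differs in its exact fields and pod subnet.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}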
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
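The pod_ready entries above show minikube polling system-critical pods until each reports the Ready condition. A rough equivalent using client-go is sketched below; the kubeconfig path is a placeholder, the namespace and pod name are taken from the log, and the simple polling loop is an assumption rather than minikube's actual pod_ready.go logic.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-d5f9l", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}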
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
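The WaitForSSH lines above show minikube invoking the external ssh binary with the machine's private key to run "exit 0" until the guest answers. Below is a minimal sketch of the same availability check using golang.org/x/crypto/ssh instead of shelling out; the address, user, and key path come from the log, while the host-key handling and retry policy are assumptions.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runSSH connects to addr with the given private key and runs cmd once.
func runSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}

func main() {
	// Retry "exit 0" until the guest's sshd is reachable.
	for i := 0; i < 10; i++ {
		if _, err := runSSH("192.168.50.213:22", "docker",
			"/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa",
			"exit 0"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}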
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
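configureAuth above regenerates the Docker-style server certificate with the SANs listed in the log (127.0.0.1, 192.168.50.213, localhost, minikube, no-preload-956479) and copies it to the guest. The sketch below shows the general shape of signing such a server certificate with crypto/x509; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and the key size, validity, and helper name are illustrative assumptions, not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Illustrative CA; the real flow loads the existing minikube CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the SANs listed in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-956479"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-956479"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.213")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)

	// Write server.pem; server-key.pem would be written the same way.
	f, err := os.Create("server.pem")
	must(err)
	defer f.Close()
	must(pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}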
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
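The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), enables IP forwarding, reloads systemd, and restarts CRI-O. A condensed sketch of driving the same commands from Go with os/exec follows (the sysctl, net.mk, and br_netfilter steps are omitted); it assumes passwordless sudo on the target and collapses minikube's per-command error handling.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command via sudo and stops at the first failure.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		// Point CRI-O at the pause image and cgroupfs driver, as in the log.
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// Enable forwarding, then pick up the new configuration.
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload`,
		`systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}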
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
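With no v1.31.0 preload tarball available, the lines above show minikube falling back to its image cache: each image is checked with podman image inspect, stale tags are removed with crictl, and the cached tarballs under /var/lib/minikube/images are loaded with podman load. A small sketch of that load step is below; the directory and tarball names mirror the log, while the helper function and its error handling are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage imports one cached image tarball into the CRI-O image store via podman.
func loadCachedImage(dir, name string) error {
	tarball := filepath.Join(dir, name)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	dir := "/var/lib/minikube/images"
	// Cached tarball names as they appear in the log.
	images := []string{
		"coredns_v1.11.1",
		"kube-proxy_v1.31.0",
		"kube-scheduler_v1.31.0",
		"kube-controller-manager_v1.31.0",
		"kube-apiserver_v1.31.0",
		"etcd_3.5.15-0",
	}
	for _, img := range images {
		if err := loadCachedImage(dir, img); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("loaded", img)
	}
}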
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
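The image-cache phase that finishes here follows one pattern per image: stat the staged archive under /var/lib/minikube/images, skip the transfer when it already exists, then load it into CRI-O with podman load. A minimal local sketch of that pattern, run directly rather than through minikube's ssh_runner, with a hypothetical archive path:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadCachedImage loads a staged image archive into the container runtime
    // via podman, mirroring the "stat ... / sudo podman load -i ..." pairs above.
    func loadCachedImage(archive string) error {
    	if _, err := os.Stat(archive); err != nil {
    		return fmt.Errorf("archive %s not staged: %w", archive, err)
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Hypothetical path in the same layout as the log's /var/lib/minikube/images entries.
    	if err := loadCachedImage("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }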
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
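The two lines above keep /etc/hosts idempotent: grep for an existing control-plane.minikube.internal entry, then rewrite the file with any stale entries dropped and the current one appended. A minimal sketch of that rewrite in Go, printing the result instead of copying it back over /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing control-plane.minikube.internal lines
    // and appends one pointing at ip, mirroring the bash pipeline in the log.
    func ensureHostsEntry(hosts []byte, ip string) []byte {
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(string(hosts), "\n"), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		out = append(out, line)
    	}
    	out = append(out, ip+"\tcontrol-plane.minikube.internal")
    	return []byte(strings.Join(out, "\n") + "\n")
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts") // error ignored for brevity in this sketch
    	fmt.Printf("%s", ensureHostsEntry(data, "192.168.50.213"))
    }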
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
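The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least another 24 hours. An equivalent check written against Go's crypto/x509, with a hypothetical certificate path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path is still valid
    // for at least d, the same property "openssl x509 -checkend" tests.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }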
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
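That verdict comes from the diff -u of the staged kubeadm.yaml.new against the kubeadm.yaml already on the node, two lines above. A minimal sketch of the same decision as a byte-for-byte comparison rather than a unified diff (file paths as in the log, error handling simplified):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfigure reports whether the freshly generated kubeadm config
    // differs from the one already present on the node.
    func needsReconfigure(current, generated string) (bool, error) {
    	old, err := os.ReadFile(current)
    	if err != nil {
    		// No existing config on the node: treat as requiring configuration.
    		return true, nil
    	}
    	fresh, err := os.ReadFile(generated)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(old, fresh), nil
    }

    func main() {
    	changed, _ := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println("reconfigure needed:", changed)
    }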
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
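The sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the entry is missing (here the files do not exist yet, so every grep exits with status 2 and the rm -f is a no-op). A compact sketch of that per-file rule, assuming local file access and an illustrative endpoint string:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfig removes path unless it already points at the expected
    // control-plane endpoint, mirroring the grep-then-rm sequence in the log.
    func cleanStaleKubeconfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return nil // nothing to clean, matching the "No such file" case above
    		}
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // already targets the right endpoint, keep it
    	}
    	return os.Remove(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		if err := cleanStaleKubeconfig("/etc/kubernetes/"+f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }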
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
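The wait above polls https://192.168.50.213:8443/healthz until it answers 200, treating the earlier 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet". A minimal polling sketch with an unauthenticated client that skips TLS verification, much like the anonymous probe in the log; the retry interval and timeout are illustrative:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes; any other status just means "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.213:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }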
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
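The interleaved pod_ready lines come from three other test processes (152550, 152463, 153366) polling metrics-server pods in their own clusters; each check reads the pod's Ready condition and logs False until it flips. A hedged equivalent by hand, assuming the standard k8s-app=metrics-server label used by the metrics-server addon:

	kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'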
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
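Every "describe nodes" attempt fails identically because the node's kubeconfig points kubectl at localhost:8443 and nothing is listening there; the empty crictl listings above show the apiserver container was never created, so the connection refusal is a symptom rather than a separate fault. Rerunning the same binary with a cheaper verb reproduces it directly (binary and kubeconfig paths copied from the log, profile name is a placeholder):

	minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig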
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
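The loop above is minikube's diagnostic pass while it waits for the control plane: for each component it runs sudo crictl ps -a --quiet --name=<component> and treats empty output as "No container was found". A minimal standalone sketch of that same check, assuming only that crictl is installed on the node and using the Go standard library (this is an illustration, not minikube's cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // checkContainer lists CRI containers by name, the way the log loop does,
    // and reports whether any IDs came back. It must run with sufficient
    // privileges (the test invokes crictl via sudo).
    func checkContainer(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        return len(strings.Fields(string(out))) > 0, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            found, err := checkContainer(c)
            if err != nil {
                fmt.Printf("%s: error: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: found=%v\n", c, found)
        }
    }

Empty output from crictl is exactly what produces the repeated "0 containers" lines above.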
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
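Each "describe nodes" attempt in this stretch fails with a refused connection to localhost:8443, which is consistent with the empty kube-apiserver listings: nothing is serving on the apiserver port yet. A quick, hedged way to confirm that locally (host and port are taken from the error text; everything else is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Probe the address kubectl is trying to reach. A "connection refused"
    // here matches the errors in the log and means no apiserver process is
    // listening on that port yet.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }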
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
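Interleaved with that loop, the other test processes (pids 152550, 152463 and 153366) keep polling their metrics-server pods and logging Ready=False. A small sketch of the same readiness check driven through kubectl, with the context and pod name as placeholders rather than values from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // readyStatus reads the pod's Ready condition, the signal the pod_ready
    // lines above are waiting on. kubeContext and pod are illustrative only.
    func readyStatus(kubeContext, namespace, pod string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := readyStatus("my-context", "kube-system", "metrics-server-xxxxx")
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Println("Ready:", status) // prints "True" once the readiness probe passes
    }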
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
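	(Editor's note, not part of the captured log: the interleaved pod_ready lines from processes 152550, 152463 and 153366 come from parallel tests re-checking the metrics-server pod roughly every two seconds. The loop below is an illustrative, generic poll-until-ready sketch of that shape; checkReady is a stand-in and not minikube's pod_ready implementation.

	// Illustrative readiness poll: re-check a condition on an interval until it
	// becomes true or the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func waitReady(ctx context.Context, interval time.Duration, checkReady func() bool) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if checkReady() {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod never became Ready: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		attempt := 0
		err := waitReady(ctx, 2*time.Second, func() bool {
			attempt++
			fmt.Printf("attempt %d: pod has status \"Ready\":\"False\"\n", attempt)
			return false // in the report above, metrics-server never turns Ready
		})
		fmt.Println(err)
	}

	In the failing tests recorded here the condition never flips to true, so the polls continue until the per-test timeout is reached.)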
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
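	
	Each retry cycle in this log starts by asking CRI-O whether any control-plane containers exist before it gathers the kubelet, dmesg, describe-nodes and CRI-O logs; because the v1.20.0 control plane never comes back up, every crictl query returns an empty ID list and is logged as "No container was found matching ...". A minimal shell sketch of that probe, built only from the crictl invocation and component names that appear in the cycle above (an illustration of the pattern, not the minikube source):
	
	    # probe CRI-O for each expected control-plane container, as the gathering loop does
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done
	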
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
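The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so the kubeadm init that follows can rewrite it. A minimal, hypothetical sketch of that loop (not the actual minikube code, just the same shell commands driven from Go) looks like:

    // Hypothetical sketch of the stale kubeconfig cleanup logged above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file does not exist.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                // Removal errors are ignored here; kubeadm init recreates the file anyway.
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }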
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
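The elevateKubeSystemPrivileges step timed above polls `kubectl get sa default` roughly every half second until the default service account exists, which is how minikube confirms the freshly reset API server is actually serving requests before it moves on to cluster setup. A hedged sketch of that wait (the 2-minute cap is an assumption, not a value taken from the log) is:

    // Hypothetical sketch of the "get sa default" poll loop shown above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // assumed cap for this sketch
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account exists - API server is serving")
                return
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~0.5s between attempts
        }
        fmt.Println("timed out waiting for the default service account")
    }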
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
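The kubelet-check failure above is kubeadm probing http://localhost:10248/healthz (the log quotes the equivalent curl command) and getting connection refused because the kubelet on this v1.20.0 node has not come up. The probe itself is just an HTTP GET; a small stand-alone sketch of it, for illustration only:

    // Hypothetical reproduction of the kubelet healthz probe quoted in the log.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // While the kubelet is down this prints e.g.
            // "dial tcp 127.0.0.1:10248: connect: connection refused", matching the log above.
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
    }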
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
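The grep/rm sequence above follows one rule: an existing kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile); otherwise it is removed so the following "kubeadm init" can regenerate it. A condensed sketch of that rule is below; run is a hypothetical stand-in for minikube's ssh_runner, and here it only prints the commands.

    package main

    import "fmt"

    // cleanupStaleConfigs keeps each kubeconfig only if it already mentions the
    // expected endpoint; otherwise it schedules the file for removal.
    func cleanupStaleConfigs(run func(cmd string) error, endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
                run(fmt.Sprintf("sudo rm -f %s", f))
            }
        }
    }

    func main() {
        dryRun := func(cmd string) error {
            fmt.Println("would run:", cmd)
            return fmt.Errorf("not executed") // force the removal branch in this dry run
        }
        cleanupStaleConfigs(dryRun, "https://control-plane.minikube.internal:8444")
    }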
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
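The --discovery-token-ca-cert-hash in the join command printed above is kubeadm's pin on the cluster CA: a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate. The sketch below recomputes such a hash, assuming the CA sits at ca.crt under the certificateDir logged earlier; error handling is minimal.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed CA location under the certificateDir "/var/lib/minikube/certs".
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }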
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
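The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. The sketch below writes an illustrative conflist of that shape; the exact fields and subnet minikube embeds may differ, so treat the JSON as an example of the bridge/host-local plugin format rather than a byte-for-byte copy.

    package main

    import "os"

    // An illustrative bridge + host-local CNI config in the conflist format; not
    // guaranteed to match minikube's embedded 1-k8s.conflist field-for-field.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // 0644 matches typical CNI config permissions; the path is the one logged above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }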
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
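The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are a wait loop: minikube keeps polling until the default service account exists, which signals that the controller manager's service-account controller is up. A condensed sketch using os/exec, with the binary and kubeconfig paths taken from the log, is below; the real loop lives in minikube's kubeadm bootstrapper.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount polls "kubectl get sa default" until it succeeds
    // or the timeout expires, mirroring the half-second cadence seen in the log.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        err := waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.31.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        fmt.Println(err)
    }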
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
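Enabling the addons only applies the manifests; the metrics-server Deployment still has to pull its image and pass readiness, and in the pod lists above it is still Pending. Two standard out-of-band checks are sketched below, assuming the kubectl context is named after the profile as the later "Done!" line suggests; neither command is specific to minikube.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Rollout status of the Deployment, and the APIService metrics-server registers.
        checks := [][]string{
            {"kubectl", "--context", "default-k8s-diff-port-697869", "-n", "kube-system",
                "rollout", "status", "deployment/metrics-server", "--timeout=60s"},
            {"kubectl", "--context", "default-k8s-diff-port-697869",
                "get", "apiservice", "v1beta1.metrics.k8s.io"},
        }
        for _, c := range checks {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            fmt.Printf("$ %v\n%s(err=%v)\n", c, out, err)
        }
    }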
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
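At this point the "no-preload-956479" start completes while metrics-server-6867b74b74-gmfbr is still Pending. A minimal way to watch whether it eventually becomes ready (the condition the later MetricsServer and AddonExistsAfterStop checks wait on) could be the commands below; they assume the kubeconfig context shares the profile name and that the addon keeps its usual metrics-server deployment name and k8s-app=metrics-server label, none of which is taken from this log:

	# watch the metrics-server rollout in the freshly started profile (illustrative sketch only)
	kubectl --context no-preload-956479 -n kube-system rollout status deployment/metrics-server --timeout=5m
	kubectl --context no-preload-956479 -n kube-system get pods -l k8s-app=metrics-server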
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
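The repeated kubelet-check failures above show the kubelet never answering on 127.0.0.1:10248, which is what the K8S_KUBELET_NOT_RUNNING exit and the cgroup-driver suggestion refer to. A minimal sketch of the follow-up checks implied by those hints, run over SSH on the affected node (the commands mirror the ones kubeadm and minikube print above; the cgroup_manager comparison and the <profile> placeholder are illustrative assumptions, not taken from this run):

    # Kubelet service state and recent journal entries
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet -n 100

    # Which control-plane containers, if any, CRI-O actually created
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Compare the cgroup driver kubelet was configured with against CRI-O's cgroup_manager
    sudo grep -i cgroup /var/lib/kubelet/config.yaml
    sudo crio config | grep -i cgroup_manager

    # Workaround suggested in the log if the drivers disagree
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

If the kubelet journal points at a cgroup-driver mismatch, aligning kubelet with systemd as suggested above is usually sufficient; otherwise the crictl listing narrows the failure down to whichever control-plane container exited.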
	
	
	==> CRI-O <==
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.156914611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c402ce66-1406-4c62-9188-1a811f3d41b3 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.157792572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=023a5026-247c-49bd-8795-b972c3a9a0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.158167931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675099158138147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=023a5026-247c-49bd-8795-b972c3a9a0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.158582423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb2d46fa-d9cf-488b-b7ff-ffabe1221840 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.158644857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb2d46fa-d9cf-488b-b7ff-ffabe1221840 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.159024570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb2d46fa-d9cf-488b-b7ff-ffabe1221840 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.180116537Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=edaee7a7-44dd-4352-9fa8-aa1fc22dcc90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.180391615Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b0640b7f-39d3-4fb1-b78c-2f1f970646ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674550578786183,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-26T12:15:50.270654963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2153a5d506441c2dfe3a5fddd5f845ad1c74c19c88b1ac83a30ef59ad33eda5,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gmfbr,Uid:558889e1-e85a-45ef-9636-892204c4cf48,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674550166397984,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gmfbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558889e1-e85a-45ef-9636-892204c4cf48
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.853184685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8489w,Uid:2bcfb870-46aa-4ec1-b958-707896e53120,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549594124622,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcfb870-46aa-4ec1-b958-707896e53120,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.285192485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-wnd26,Uid:94b517df-9201-4602-
a58f-77617a38d641,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549558540051,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.251563909Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&PodSandboxMetadata{Name:kube-proxy-gwj5w,Uid:18bfe796-2c64-420d-a01d-ea68c56573c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549335905084,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.014214683Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-956479,Uid:c858f6a584517160d9207cc49df9c77b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724674538620889711,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8443,kubernetes.io/config.hash: c858f6a584517160d9207cc49df9c77b,kubernetes.io/config.seen: 2024-08-26T12:15:38.168373687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8772aac82dc9becc39dd4c3f23175ca
78021164a7a97391fdbe4d18fc6074a90,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-956479,Uid:6880a49a44beb7e7c7e14fe0baab6d74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538616100343,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.213:2379,kubernetes.io/config.hash: 6880a49a44beb7e7c7e14fe0baab6d74,kubernetes.io/config.seen: 2024-08-26T12:15:38.168372521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-956479,Uid:87fed30611b82eae5e5fa8ea1240838d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538603318874,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87fed30611b82eae5e5fa8ea1240838d,kubernetes.io/config.seen: 2024-08-26T12:15:38.168365679Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-956479,Uid:e28c62c00ab6b72465e92210eaf48849,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538598779652,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: e28c62c00ab6b72465e92210eaf48849,kubernetes.io/config.seen: 2024-08-26T12:15:38.168370232Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-956479,Uid:c858f6a584517160d9207cc49df9c77b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724674251230300172,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8443,kubernetes.io/config.hash: c858f6a584517160d9207cc49df9c77b,kubernetes.io/config.seen: 2024-08-26T12:10:50.737119614Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=edaee7a7-44dd-4352-9fa8-aa1fc22dcc90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.181050648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee467109-29c0-4b1a-90d3-b63e7a88fe5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.181106726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee467109-29c0-4b1a-90d3-b63e7a88fe5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.181311627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee467109-29c0-4b1a-90d3-b63e7a88fe5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.202173165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc6513dc-7664-4d9c-9e4a-3bf1e096dddc name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.202258985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc6513dc-7664-4d9c-9e4a-3bf1e096dddc name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.203293155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62dee0b6-5841-4063-abcc-1f80b86fb8a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.203651260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675099203629952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62dee0b6-5841-4063-abcc-1f80b86fb8a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.204130047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23697931-ec3b-45c0-ab94-77d7ec32be84 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.204227915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23697931-ec3b-45c0-ab94-77d7ec32be84 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.204518545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23697931-ec3b-45c0-ab94-77d7ec32be84 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.239965631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56c3c18d-5615-4840-934e-5f03a737fa40 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.240056640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56c3c18d-5615-4840-934e-5f03a737fa40 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.240948005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b856462-6d8d-4452-afd0-d0209620749b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.241295147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675099241271328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b856462-6d8d-4452-afd0-d0209620749b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.241807632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=347676c3-0314-4939-8957-63e5426ed9eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.241896088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=347676c3-0314-4939-8957-63e5426ed9eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:24:59 no-preload-956479 crio[728]: time="2024-08-26 12:24:59.242117709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=347676c3-0314-4939-8957-63e5426ed9eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cb06f1e6077d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   1e650c98bccdb       storage-provisioner
	4c5433ef50979       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2997d8433b416       coredns-6f6b679f8f-wnd26
	d7f33f4691468       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0e1006f2fd77e       coredns-6f6b679f8f-8489w
	2f7d6667cb757       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   f6061a901467f       kube-proxy-gwj5w
	42327b5ac7970       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   e2502d0452fa2       kube-apiserver-no-preload-956479
	2f6478fc5d177       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   29558bf11d3b5       kube-scheduler-no-preload-956479
	a1149aeff78c8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   8772aac82dc9b       etcd-no-preload-956479
	6e0bad9bca873       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   86c389187db4d       kube-controller-manager-no-preload-956479
	2aae61c21df49       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   826b836ede432       kube-apiserver-no-preload-956479
	
	
	==> coredns [4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-956479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-956479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=no-preload-956479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:24:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:21:00 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:21:00 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:21:00 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:21:00 +0000   Mon, 26 Aug 2024 12:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    no-preload-956479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f9d2e930ac64583a71d5c8ed83b972c
	  System UUID:                0f9d2e93-0ac6-4583-a71d-5c8ed83b972c
	  Boot ID:                    ec17325c-254e-4dd8-a77b-56f28d12a1f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-8489w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-6f6b679f8f-wnd26                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-956479                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-956479             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-no-preload-956479    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-gwj5w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-956479             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-gmfbr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node no-preload-956479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node no-preload-956479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node no-preload-956479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node no-preload-956479 event: Registered Node no-preload-956479 in Controller
	
	
	==> dmesg <==
	[  +0.052280] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.067980] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.974609] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.443330] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.392902] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.074602] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069160] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.202677] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.121597] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.297557] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +15.783261] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.063537] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.605503] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +3.557868] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.859554] kauditd_printk_skb: 91 callbacks suppressed
	[Aug26 12:15] systemd-fstab-generator[3073]: Ignoring "noauto" option for root device
	[  +0.067447] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.999881] systemd-fstab-generator[3397]: Ignoring "noauto" option for root device
	[  +0.083516] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.787589] systemd-fstab-generator[3522]: Ignoring "noauto" option for root device
	[  +0.842382] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.391489] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4] <==
	{"level":"info","ts":"2024-08-26T12:15:39.301421Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-26T12:15:39.301485Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.213:2380"}
	{"level":"info","ts":"2024-08-26T12:15:39.301996Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.213:2380"}
	{"level":"info","ts":"2024-08-26T12:15:39.303150Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"afd31c34526e5864","initial-advertise-peer-urls":["https://192.168.50.213:2380"],"listen-peer-urls":["https://192.168.50.213:2380"],"advertise-client-urls":["https://192.168.50.213:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.213:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-26T12:15:39.309768Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-26T12:15:39.794868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:39.794959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:39.795040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgPreVoteResp from afd31c34526e5864 at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:39.795106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgVoteResp from afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became leader at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: afd31c34526e5864 elected leader afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.799976Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.804046Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"afd31c34526e5864","local-member-attributes":"{Name:no-preload-956479 ClientURLs:[https://192.168.50.213:2379]}","request-path":"/0/members/afd31c34526e5864/attributes","cluster-id":"64fdbb8e23141dc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:15:39.804096Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:39.805057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:39.815007Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:39.818722Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.818869Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.818896Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.820344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-08-26T12:15:39.816840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:39.823094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:39.818433Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:39.825158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:24:59 up 14 min,  0 users,  load average: 0.14, 0.20, 0.17
	Linux no-preload-956479 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394] <==
	W0826 12:15:31.560591       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.574395       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.594996       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.635171       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.650888       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.669709       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.713234       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.719933       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.722519       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.724066       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.926024       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.947165       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.075241       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.087035       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.269054       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.283013       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.424180       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.622810       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.927198       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:34.724018       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.086082       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.089865       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.183644       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.221791       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.306035       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340] <==
	W0826 12:20:42.371425       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:20:42.371483       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:20:42.372766       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:20:42.372772       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:21:42.373420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:21:42.373527       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:21:42.373473       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:21:42.373822       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:21:42.374702       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:21:42.375876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:23:42.375172       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:23:42.375282       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:23:42.376015       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:23:42.376099       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:23:42.376995       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:23:42.378074       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb] <==
	E0826 12:19:48.412455       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:19:48.872806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:20:18.419041       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:18.881296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:20:48.426271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:20:48.890008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:21:00.581256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-956479"
	E0826 12:21:18.433823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:18.897502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:21:36.106219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="316.345µs"
	E0826 12:21:48.441434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:21:48.907264       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:21:51.099207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="97.06µs"
	E0826 12:22:18.448662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:18.918435       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:22:48.455690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:22:48.928296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:18.462572       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:18.936861       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:23:48.470402       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:23:48.947074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:24:18.477532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:24:18.957684       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:24:48.485246       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:24:48.966509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:15:50.569922       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:15:50.607524       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.213"]
	E0826 12:15:50.607624       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:15:50.659546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:15:50.659669       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:15:50.659715       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:15:50.669431       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:15:50.669829       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:15:50.669864       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:15:50.671362       1 config.go:197] "Starting service config controller"
	I0826 12:15:50.671514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:15:50.671472       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:15:50.671623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:15:50.672208       1 config.go:326] "Starting node config controller"
	I0826 12:15:50.672548       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:15:50.772760       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:15:50.772826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:15:50.773131       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981] <==
	W0826 12:15:41.462239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:41.462266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:41.462328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:15:41.462354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:41.462399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:41.462425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.265636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:42.265691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.326948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:15:42.327001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.327134       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:15:42.327200       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:15:42.331351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:42.331405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.393581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 12:15:42.393684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.494606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 12:15:42.494657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.616060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:15:42.616204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.753247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:15:42.753394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.807202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:15:42.807690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0826 12:15:44.147362       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:23:46 no-preload-956479 kubelet[3404]: E0826 12:23:46.084372    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:23:54 no-preload-956479 kubelet[3404]: E0826 12:23:54.273389    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675034272547347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:23:54 no-preload-956479 kubelet[3404]: E0826 12:23:54.274117    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675034272547347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:01 no-preload-956479 kubelet[3404]: E0826 12:24:01.082594    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:24:04 no-preload-956479 kubelet[3404]: E0826 12:24:04.276195    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675044275788301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:04 no-preload-956479 kubelet[3404]: E0826 12:24:04.276661    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675044275788301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:14 no-preload-956479 kubelet[3404]: E0826 12:24:14.278477    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675054277937142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:14 no-preload-956479 kubelet[3404]: E0826 12:24:14.278517    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675054277937142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:16 no-preload-956479 kubelet[3404]: E0826 12:24:16.082306    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:24:24 no-preload-956479 kubelet[3404]: E0826 12:24:24.279869    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675064279491900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:24 no-preload-956479 kubelet[3404]: E0826 12:24:24.280384    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675064279491900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:31 no-preload-956479 kubelet[3404]: E0826 12:24:31.081845    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:24:34 no-preload-956479 kubelet[3404]: E0826 12:24:34.283373    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675074282828606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:34 no-preload-956479 kubelet[3404]: E0826 12:24:34.283456    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675074282828606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]: E0826 12:24:44.149322    3404 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]: E0826 12:24:44.286111    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675084285226062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:44 no-preload-956479 kubelet[3404]: E0826 12:24:44.286281    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675084285226062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:46 no-preload-956479 kubelet[3404]: E0826 12:24:46.083503    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:24:54 no-preload-956479 kubelet[3404]: E0826 12:24:54.287712    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675094287239034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:54 no-preload-956479 kubelet[3404]: E0826 12:24:54.287782    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675094287239034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:24:58 no-preload-956479 kubelet[3404]: E0826 12:24:58.083151    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	
	
	==> storage-provisioner [1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4] <==
	I0826 12:15:50.791670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:15:50.813484       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:15:50.813673       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:15:50.823654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:15:50.823899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e!
	I0826 12:15:50.825150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47b77d2e-e671-41ab-a057-7c43e509713c", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e became leader
	I0826 12:15:50.924050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956479 -n no-preload-956479
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-956479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gmfbr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr: exit status 1 (67.499125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gmfbr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
E0826 12:19:34.327329  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
E0826 12:22:20.476937  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
(last warning repeated 16 more times)
E0826 12:22:37.402178  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
(last warning repeated 115 more times)
E0826 12:24:34.327133  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
	[... the preceding helpers_test.go:329 warning ("connection refused" dialing 192.168.72.136:8443 while listing k8s-app=kubernetes-dashboard pods) repeated verbatim for the remainder of the wait; 141 duplicate lines elided ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
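(Not part of the test run; a minimal diagnostic sketch. The repeated warnings above suggest the apiserver on 192.168.72.136:8443 was down for the whole wait, which one could confirm by hand with commands like the following, reusing the profile name from this log.)

	# illustrative only: probe the apiserver endpoint the pod-list calls were failing against
	curl -sk --max-time 5 https://192.168.72.136:8443/healthz || echo "apiserver unreachable"
	# illustrative only: ask minikube for the apiserver state of the same profile
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-839656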
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (235.376028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-839656" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
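(Illustrative sketch, not taken from the test: assuming the apiserver becomes reachable again, the same dashboard check can be repeated manually with kubectl against the profile's context, using the label selector shown in the warnings above.)

	# list the dashboard pods the test was polling for
	kubectl --context old-k8s-version-839656 -n kubernetes-dashboard get pods \
	  -l k8s-app=kubernetes-dashboard
	# wait for them to become Ready, with a shorter timeout than the test's 9m0s
	kubectl --context old-k8s-version-839656 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=120s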
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (248.112097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25: (1.68231806s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
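(Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~83ms drift. A small sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.)

// Sketch, not minikube's fix.go: compare the guest's epoch string with a host
// timestamp and flag drift beyond a tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64) // e.g. "1724674172.168471460"
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	return guest.Sub(host), nil
}

func main() {
	delta, err := clockDelta("1724674172.168471460", time.Unix(1724674172, 85340981))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // hypothetical threshold for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}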
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
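(Editor's note: the preload path above is: ask crictl for images, and if the expected ones are missing, scp the cached lz4 tarball in and unpack it under /var before re-checking. A rough sketch of the extraction step follows, assuming tar and lz4 exist on the guest; the tarball path is the one from the log.)

// Sketch: extract a preloaded image tarball the way the log's tar invocation does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	// mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}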
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
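(Editor's note: the generated file above is a single multi-document YAML carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; the log writes it to /var/tmp/minikube/kubeadm.yaml.new and later diffs it against the live copy. Below is a quick stdlib-only sketch that lists the document kinds in such a file; the path is an assumption for illustration.)

// Sketch: split a kubeadm config file on "---" separators and report each document's kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}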
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
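(Editor's note: the three blocks above repeat the same pattern per CA: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it — b5213941.0, 51391683.0 and 3ec20f2e.0 in this run. A sketch of that pattern follows, shelling out to openssl for the hash; it assumes openssl on PATH and root privileges, and the paths are illustrative.)

// Sketch: hash a CA cert and link it under /etc/ssl/certs/<hash>.0 so OpenSSL
// consumers can find it, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}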
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
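(Editor's note: the `-checkend 86400` probes above simply ask whether each control-plane certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, instead of shelling out, looks roughly like this; the cert path is one of those from the log.)

// Sketch: report whether a PEM-encoded certificate expires within the next 24h,
// equivalent to `openssl x509 -noout -in cert.crt -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}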
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
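	The 500 responses above are normal during an apiserver restart: /healthz keeps failing while individual post-start hooks (crd-informer-synced, rbac/bootstrap-roles, and so on) are still syncing, and minikube just keeps re-polling the endpoint until it returns 200. A minimal sketch of that kind of poll loop, assuming a self-signed apiserver cert and the ~500ms cadence visible in the timestamps (this is illustrative, not minikube's actual api_server.go code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the timeout elapses. Non-200 responses carry a plain-text body listing which
// post-start hooks are still failing, as seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate here, so skip
		// verification for this health probe only (sketch assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.6:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```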
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
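	The scp step above drops a 496-byte bridge CNI config into /etc/cni/net.d/1-k8s.conflist; the exact payload is not shown in the log. The sketch below writes a typical bridge + portmap conflist of the kind used with the bridge CNI recommended for the "kvm2" driver with "crio"; all field values here are illustrative assumptions, not the real file contents:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Illustrative bridge CNI config; the real 1-k8s.conflist may differ in
	// plugin options and pod CIDR.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```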
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
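	Each pod_ready wait above is short-circuited while the node itself still reports Ready=False; once the node turns Ready, the loop checks the pod's own Ready condition until the 4m0s budget runs out. A rough client-go equivalent of that per-pod check, with a placeholder kubeconfig path and one of the pod names from the log (this is a sketch, not minikube's pod_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named kube-system pod has condition Ready=True.
func podIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for up to 4 minutes, mirroring the "waiting up to 4m0s" lines above.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := podIsReady(cs, "kube-controller-manager-embed-certs-923586"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```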
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
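	Everything in provisionDockerMachine above is driven over plain SSH: minikube authenticates with the machine's id_rsa key and runs the hostname/tee commands shown, collecting their output. A compact sketch of running one such command with golang.org/x/crypto/ssh; the host, user, and key path are taken from the log, but the helper itself is illustrative rather than libmachine's implementation:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes a single command on the remote host and returns its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.72.136:22", "docker",
		"/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa",
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```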
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
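	The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and put conmon in the "pod" cgroup before crio is restarted. A sketch of equivalent edits done from Go with regexp replacements over the same drop-in file; the end state matches the sed commands in the log, but the helper itself is illustrative:

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Pin the pause image used for pod sandboxes.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup lines, then set cgroupfs and re-add
	// conmon_cgroup = "pod" right after cgroup_manager, as the sed edits do.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` follows, as in the log.
}
```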
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
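	The "will retry after ..." lines above come from a retry helper that re-polls the libvirt DHCP leases with a growing, jittered delay until the restarted domain gets an IP address. A generic sketch of that pattern; the backoff parameters and attempt count are assumptions, not retry.go's actual values:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
// reached, sleeping a growing, jittered interval between attempts, much like
// the "will retry after 389.554302ms" lines in the log.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(float64(base) * (1 + rand.Float64()) * float64(attempt+1))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempts := 0
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 4 { // stand-in for "unable to find current IP address of domain"
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
```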
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
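	From this point the log polls the apiserver's /healthz endpoint roughly every half second: as the following entries show, the first attempts are refused while the process starts, later ones return 403 for the anonymous user and then 500 while post-start hooks finish, until a 200 finally arrives. A rough, illustrative Go sketch of such a wait loop (not minikube's actual code; URL and timeout taken from the log, and TLS verification skipped here purely for brevity):

    // waithealthz.go - poll an HTTPS healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification for the sketch; real code should trust the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.11:8444/healthz", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("healthz returned 200")
    }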
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
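	The two steps above create /etc/cni/net.d and copy in a bridge CNI conflist. The exact 496-byte payload is not reproduced in the log; the following Go sketch writes a representative bridge + host-local IPAM conflist of that general shape (file contents and subnet are illustrative assumptions, not minikube's actual file):

    // writeconflist.go - drop an example bridge CNI config into /etc/cni/net.d.
    package main

    import "os"

    // Representative bridge conflist; not the exact file minikube copies.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }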
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
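	The pod_ready entries here (and the metrics-server ones from the other profile) come from minikube repeatedly reading each system-critical pod and waiting for its Ready condition to become True within the 4m0s budget noted above. A hedged client-go sketch of that per-pod check (kubeconfig path and pod name are placeholders; this is not minikube's own helper):

    // podready.go - report whether a pod's Ready condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder kubeconfig path and pod name taken from the log for flavour.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-d5f9l", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
    }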
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
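provision.go generates this server certificate in Go; purely to illustrate what the org and SAN list above amount to, an equivalent openssl flow would look roughly like this (a sketch, not minikube's actual code path; file names and the 365-day validity are placeholders):

  openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.no-preload-956479" \
      -keyout server-key.pem -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.213,DNS:localhost,DNS:minikube,DNS:no-preload-956479')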
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
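The tee/restart step above leaves the extra CRI-O flag in a sysconfig drop-in; to confirm it on the guest (sketch):

  cat /etc/sysconfig/crio.minikube        # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  sudo systemctl status crio --no-pager   # crio should be active again after the restart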
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
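filesync.go just copied the host's extra certificate into the guest's trust directory; a quick check that it landed intact (sketch, assuming the synced .pem is an X.509 certificate as in the cert-sync tests):

  sudo openssl x509 -in /etc/ssl/certs/1065982.pem -noout -subject -enddate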
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
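The delta above is simply the guest's `date +%s.%N` output compared against the host's wall clock at the moment of the call; a rough shell equivalent (sketch; SSH key path and options omitted):

  guest_ts=$(ssh docker@192.168.50.213 'date +%s.%N')   # guest clock
  host_ts=$(date +%s.%N)                                # host clock, read right after
  echo "skew: $(echo "$host_ts - $guest_ts" | bc) s"    # ~0.073 s in this run, within tolerance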
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
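The /etc/crictl.yaml write and the sed edits above leave the runtime configured as follows; to verify on the guest after the restart (sketch):

  cat /etc/crictl.yaml               # runtime-endpoint: unix:///var/run/crio/crio.sock
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",
  lsmod | grep br_netfilter          # loaded via modprobe after the sysctl probe failed
  cat /proc/sys/net/ipv4/ip_forward  # 1, written just before the daemon-reload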
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
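The no-preload profile has no preloaded image tarball, so each of the eight images listed earlier is copied from the host cache and loaded into CRI-O's store via podman, as logged above. Condensed, the per-image flow is (sketch; `~` abbreviates the jenkins workspace path seen in the log):

  # host: ~/.minikube/cache/images/amd64/registry.k8s.io/...  --scp-->  guest: /var/lib/minikube/images/<name>
  sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
  sudo crictl images | grep -E 'coredns|kube-|etcd|pause|storage-provisioner'   # all eight should now resolve locally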
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
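The kubelet flags above are materialized on the node as a systemd drop-in (the scp steps further down write 10-kubeadm.conf and kubelet.service); to see the merged unit on the guest (sketch):

  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  systemctl cat kubelet --no-pager       # drop-in merged over /lib/systemd/system/kubelet.service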
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
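The one-liner above rewrites /etc/hosts idempotently: it drops any existing line that already maps control-plane.minikube.internal and appends the current node IP, so repeated starts do not accumulate duplicate entries. A minimal Go sketch of the same filter-and-append pattern, operating on a string rather than the real /etc/hosts (names chosen for illustration):

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing line for the given hostname and
// appends a fresh "IP<TAB>hostname" mapping, mirroring the grep -v / echo
// pattern the log shows being run over /etc/hosts.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.99\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.50.213", "control-plane.minikube.internal"))
}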
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
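Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); a failing check would trigger regeneration rather than reuse. The equivalent test in Go, as a minimal sketch with a hypothetical certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// before now+d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Hypothetical path; on the minikube node the profile certs live under
	// /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}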
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
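Because existing configuration files were found (restartPrimaryControlPlane), minikube does not run a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against the generated kubeadm.yaml. A minimal Go sketch of driving that phase sequence with os/exec, assuming kubeadm is on PATH and the config path matches the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Replay the kubeadm init phases in the same order the log shows; each phase
// reads the generated /var/tmp/minikube/kubeadm.yaml.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v\n%s", args, out)
		if err != nil {
			log.Fatalf("phase %v failed: %v", p, err)
		}
	}
}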
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
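The wait above polls https://192.168.50.213:8443/healthz every 500ms: first connection refused while the apiserver starts, then 403 for the anonymous user, then 500 with the verbose check list while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200 "ok". A minimal Go sketch of that polling loop, with TLS verification skipped purely for illustration (the real client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: real callers should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.213:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver healthy")
}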
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
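The pod_ready waits above all follow one pattern: poll the pod's status at a short interval until its Ready condition is True or the 4m0s budget is spent, skipping pods whose node is not yet Ready. A minimal, client-go-free sketch of that poll-with-predicate shape; the readiness check itself is stubbed out here and would normally query the API server:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls ready at each interval until it returns true, returns an
// error, or the timeout elapses - the shape used for each pod_ready wait.
func pollUntil(timeout, interval time.Duration, ready func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Stub predicate: pretend the pod becomes Ready after ~3 seconds.
	err := pollUntil(4*time.Minute, 500*time.Millisecond, func() (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	})
	fmt.Println("wait finished:", err, "after", time.Since(start).Round(time.Second))
}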
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
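The block above is a single pass of minikube's control-plane health loop for this v1.20.0 node: it looks for a kube-apiserver process, asks the CRI runtime for each control-plane container, then collects kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the same checks run by hand on the node (assuming SSH access and that crictl, journalctl and the bundled kubectl exist at the paths shown in the log), using only commands that already appear above:

	# is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# any container for a given component, e.g. etcd (returns nothing here)
	sudo crictl ps -a --quiet --name=etcd
	# recent kubelet and CRI-O journals
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# node view via the bundled kubectl; fails because nothing listens on localhost:8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Every crictl query comes back empty and kubectl cannot reach localhost:8443, so the loop below repeats the same pass every few seconds.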
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
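The pod_ready lines interleaved here come from three other test runs (PIDs 152550, 152463 and 153366), each polling a metrics-server pod in kube-system until it reports Ready. A rough hand-run equivalent of that poll, assuming kubectl is pointed at the affected cluster and using one of the pod names taken from the log:

	# prints the Ready condition status that pod_ready.go is waiting on ("False" throughout this section)
	kubectl -n kube-system get pod metrics-server-6867b74b74-cw5t8 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'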
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
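	(Each failed "describe nodes" attempt above ends with a refused connection to localhost:8443, i.e. the API server is not reachable, so minikube falls back to node-level log gathering. A rough sketch of those same gathering steps, using only commands and paths taken verbatim from the log above:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The last command is the one that repeatedly fails with "connection refused", which is consistent with crictl finding no kube-apiserver container.)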
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
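	The cycle above is the start loop driven by process 152982 against Kubernetes v1.20.0 (the old-k8s-version profile): every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard comes back empty, and the bundled v1.20.0 kubectl cannot reach localhost:8443 because no apiserver is listening. The same probe sequence can be replayed by hand from inside the guest; the sketch below only strings together commands that already appear in this log, and the profile name is a placeholder, not taken from this report:

	    # minimal sketch, assuming SSH access to the affected minikube guest
	    # <profile> is a placeholder for the real profile name
	    minikube ssh -p <profile>
	    sudo crictl ps -a --quiet --name=kube-apiserver        # empty output: the apiserver container was never created
	    sudo journalctl -u kubelet -n 400                      # kubelet side: why static pods are not coming up
	    sudo journalctl -u crio -n 400                         # CRI-O side of the same startup
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig          # fails with "connection refused" while 8443 is down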
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
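	Interleaved with that loop, three other runs (processes 152550, 152463 and 153366) keep polling metrics-server pods that never report Ready. A hedged way to dig into one of those pods uses only the pod name shown in the log; the --context value is a placeholder, since the profile names are not visible in this excerpt:

	    # sketch only; substitute the real kubectl context for <profile>
	    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-cw5t8
	    kubectl --context <profile> -n kube-system logs pod/metrics-server-6867b74b74-cw5t8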
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
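For reference, the sha256 --discovery-token-ca-cert-hash echoed in the join commands above can be re-derived on the control-plane node from the cluster CA (a sketch, assuming kubeadm's default certificate path /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'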
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
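The 496-byte file scp'd above is the bridge CNI conflist minikube generates for the "kvm2" + "crio" combination; a quick way to see exactly what landed on the node (profile name taken from the log, command is a sketch only):

    out/minikube-linux-amd64 -p default-k8s-diff-port-697869 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"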
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
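The polling loop above (elevateKubeSystemPrivileges) simply re-runs "kubectl get sa default" until the default service account exists after the minikube-rbac clusterrolebinding has been created; roughly equivalent checks from the host, assuming the kubectl context name matches the profile, would be:

    kubectl --context default-k8s-diff-port-697869 get serviceaccount default
    kubectl --context default-k8s-diff-port-697869 get clusterrolebinding minikube-rbac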
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
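The readiness wait above polls each system-critical pod in turn; a one-shot equivalent with kubectl (context name assumed to match the profile) would be roughly:

    kubectl --context default-k8s-diff-port-697869 wait --for=condition=Ready node/default-k8s-diff-port-697869 --timeout=6m
    kubectl --context default-k8s-diff-port-697869 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m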
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
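To confirm what was enabled above, one could list the addons and inspect the metrics-server deployment; note that this run points metrics-server at the fake.domain test image (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier), so its pod is not expected to become Ready:

    out/minikube-linux-amd64 -p default-k8s-diff-port-697869 addons list
    kubectl --context default-k8s-diff-port-697869 -n kube-system get deploy metrics-server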
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
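The two kubelet-check lines above (from the 152982 process) mean the kubelet on that node is not yet answering its health endpoint; typical on-node diagnostics at this point would be:

    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50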
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
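The healthz probe above hits the apiserver directly on the profile's non-default port (8444); the same check by hand, assuming anonymous access to /healthz and /version is permitted as in a default cluster, would be:

    curl -k https://192.168.61.11:8444/healthz
    curl -k https://192.168.61.11:8444/version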
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
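After the "Done!" message the kubeconfig written earlier is the active one, so a plain kubectl call should now target the new cluster, e.g.:

    kubectl config current-context
    kubectl get nodes -o wide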
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
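	[editor note] The addon lines above follow a two-step pattern: each manifest is copied onto the node (the "scp memory -->" / "scp metrics-server/..." entries) and then applied with the node-local kubectl binary against /var/lib/minikube/kubeconfig. The sketch below only illustrates that pattern; it is not minikube's ssh_runner code, the inline manifest placeholder is invented, and it assumes it runs on the node itself with root privileges.

	// Illustrative only: place a manifest under /etc/kubernetes/addons and apply it
	// with the pinned kubectl binary, mirroring the scp + "kubectl apply -f" steps above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Placeholder content; the real deployment YAML is what the log copies over.
		manifest := []byte("# metrics-server Deployment manifest would go here\n")
		path := "/etc/kubernetes/addons/metrics-server-deployment.yaml"

		// Step 1: write the manifest on the node (the log does this over SSH via scp).
		if err := os.WriteFile(path, manifest, 0644); err != nil {
			fmt.Println("write failed:", err)
			return
		}

		// Step 2: apply it with the node's kubeconfig, as in the logged invocation.
		cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply", "-f", path)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

	In the log the metrics-server manifests are applied in a single batched kubectl call covering all four files rather than one apply per file.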
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
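	[editor note] Before printing "Done!", the run above waits for https://192.168.50.213:8443/healthz to return 200 (api_server.go:253/279). Below is a minimal, self-contained sketch of that kind of poll; the function name and timeout are illustrative, and certificate verification is skipped only because this is a throwaway example against a self-signed test cluster, not minikube's actual api_server.go.

	// Hypothetical sketch: poll an apiserver healthz endpoint until it returns
	// HTTP 200 or a deadline expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Skipping TLS verification is acceptable only in this illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.213:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}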
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
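	[editor note] The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not contain it; here the files simply do not exist, so grep exits with status 2 and the rm -f is issued anyway. A rough sketch of that check-then-remove loop follows; the helper name is invented and this is not minikube's kubeadm.go.

	// Illustrative only: the grep-then-remove pattern shown in the log.
	// File list and endpoint are taken from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func cleanStaleConfig(endpoint string, files []string) {
		for _, f := range files {
			// If the file does not reference the expected endpoint (or does not
			// exist), grep exits non-zero and the file is removed.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s, removing\n", endpoint, f)
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfig("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}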
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
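	[editor note] The repeated [kubelet-check] failures above come from probing the kubelet health endpoint on localhost:10248 and getting "connection refused". The snippet below reproduces that single probe so the failure mode is easy to see on the node; it is a sketch of the check, not kubeadm's implementation.

	// Minimal sketch of the probe the [kubelet-check] lines describe:
	// an HTTP GET against the kubelet health endpoint.
	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. "connect: connection refused" when the kubelet is not running,
			// matching the failures in the log above.
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
	}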
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
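	[editor note] After the init failure, minikube enumerates CRI containers per component with "sudo crictl ps -a --quiet --name=<component>"; an empty result produces the "No container was found matching ..." warnings above. A hypothetical stand-alone version of that enumeration is sketched below (the helper name is invented; this is not minikube's cri.go).

	// Hypothetical helper: list CRI container IDs for a given name filter via crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the container IDs crictl reports for a name filter.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Println("crictl failed:", err)
				continue
			}
			// An empty list corresponds to the "No container was found matching ..." warnings.
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}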
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
	
	
	==> CRI-O <==
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.210221449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675224210199459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09832185-b6af-42e4-8df0-3483694e230c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.210726572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33700a02-d39a-4810-993c-c213c99c3934 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.210805730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33700a02-d39a-4810-993c-c213c99c3934 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.210884371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33700a02-d39a-4810-993c-c213c99c3934 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.242251733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b378a27a-988e-4f7f-b285-42ad9b1b91de name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.242336804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b378a27a-988e-4f7f-b285-42ad9b1b91de name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.243499684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fdb8e07-8281-4c95-b73e-8ce9e2ad24c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.243994123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675224243962154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fdb8e07-8281-4c95-b73e-8ce9e2ad24c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.244653834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec0cea8e-4122-49c7-a084-fe1eef2f83d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.244710525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec0cea8e-4122-49c7-a084-fe1eef2f83d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.244752500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ec0cea8e-4122-49c7-a084-fe1eef2f83d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.275682432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eba30b4-cfca-4e65-aceb-d3b23fed1cb8 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.275772586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eba30b4-cfca-4e65-aceb-d3b23fed1cb8 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.276946488Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=328a2e10-364f-44ff-989a-025eb9141586 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.277384696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675224277361996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=328a2e10-364f-44ff-989a-025eb9141586 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.278124521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97e50217-e838-457c-a6f0-353ef2b892e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.278176888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97e50217-e838-457c-a6f0-353ef2b892e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.278219383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97e50217-e838-457c-a6f0-353ef2b892e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.309887104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aec4b80b-8506-4f3b-9972-1481e8fcefae name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.309968979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aec4b80b-8506-4f3b-9972-1481e8fcefae name=/runtime.v1.RuntimeService/Version
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.311612540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb7bc85f-2abf-49ac-ad08-1da0081522ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.312014473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675224311984100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb7bc85f-2abf-49ac-ad08-1da0081522ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.312525363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a754242-16c9-4bbc-8bc6-32e42d956827 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.312573980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a754242-16c9-4bbc-8bc6-32e42d956827 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:27:04 old-k8s-version-839656 crio[650]: time="2024-08-26 12:27:04.312610619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a754242-16c9-4bbc-8bc6-32e42d956827 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug26 12:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052898] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039892] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851891] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935402] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.449604] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.385904] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.067684] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067976] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.189122] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.154809] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.263872] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.466854] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 130 callbacks suppressed
	[Aug26 12:10] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +12.058589] kauditd_printk_skb: 46 callbacks suppressed
	[Aug26 12:14] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug26 12:16] systemd-fstab-generator[5304]: Ignoring "noauto" option for root device
	[  +0.068224] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:27:04 up 17 min,  0 users,  load average: 0.17, 0.09, 0.06
	Linux old-k8s-version-839656 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: goroutine 148 [syscall]:
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: syscall.Syscall6(0xe8, 0xc, 0xc000999b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000999b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000403fa0, 0x0, 0x0, 0x0)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000bc3540)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: goroutine 146 [select]:
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000bc2dc0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001bc8a0, 0x0, 0x0)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000c52000)
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 26 12:27:04 old-k8s-version-839656 kubelet[6485]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 26 12:27:04 old-k8s-version-839656 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 26 12:27:04 old-k8s-version-839656 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (236.754186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-839656" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.61s)
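Note on the failure above: kubeadm's wait-control-plane phase timed out because the kubelet never became healthy, and minikube's own log suggests inspecting 'journalctl -xeu kubelet' and retrying with the systemd cgroup driver. A minimal manual follow-up, assuming the old-k8s-version-839656 profile from this log still exists, could look like the commands below; they are illustrative only and were not part of the recorded run.

  # Inspect the kubelet journal and container state on the node (commands taken from the advice printed above)
  minikube -p old-k8s-version-839656 ssh -- sudo journalctl -xeu kubelet --no-pager
  minikube -p old-k8s-version-839656 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

  # Retry the start with the cgroup driver suggested in the log
  minikube start -p old-k8s-version-839656 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd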

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (408.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923586 -n embed-certs-923586
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:30:42.446691568 +0000 UTC m=+6247.772357293
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-923586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-923586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.105µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-923586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
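For reference, the assertion above effectively checks that a pod labelled k8s-app=kubernetes-dashboard is running and that the dashboard-metrics-scraper deployment was created with the overridden registry.k8s.io/echoserver:1.4 image. A rough manual equivalent, using the context name from this log (sketch only; no outcome implied), would be:

  # List dashboard pods by the label the test waits on
  kubectl --context embed-certs-923586 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  # Show the images configured on the dashboard-metrics-scraper deployment
  kubectl --context embed-certs-923586 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'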
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-923586 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-923586 logs -n 25: (1.287561065s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:28 UTC | 26 Aug 24 12:28 UTC |
	| start   | -p newest-cni-114926 --memory=2200 --alsologtostderr   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:28 UTC | 26 Aug 24 12:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-114926             | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-114926                  | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-114926 --memory=2200 --alsologtostderr   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-114926 image list                           | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	| delete  | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	| start   | -p auto-814705 --memory=3072                           | auto-814705                  | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	| start   | -p kindnet-814705                                      | kindnet-814705               | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:30:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:30:38.680115  161244 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:30:38.680222  161244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:30:38.680229  161244 out.go:358] Setting ErrFile to fd 2...
	I0826 12:30:38.680234  161244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:30:38.680401  161244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:30:38.681151  161244 out.go:352] Setting JSON to false
	I0826 12:30:38.682958  161244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7980,"bootTime":1724667459,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:30:38.683205  161244 start.go:139] virtualization: kvm guest
	I0826 12:30:38.685630  161244 out.go:177] * [kindnet-814705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:30:38.687183  161244 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:30:38.687248  161244 notify.go:220] Checking for updates...
	I0826 12:30:38.689955  161244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:30:38.691314  161244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:30:38.692698  161244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:30:38.693872  161244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:30:38.695102  161244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:30:38.696732  161244 config.go:182] Loaded profile config "auto-814705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:30:38.696870  161244 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:30:38.696951  161244 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:30:38.697049  161244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:30:39.394612  161244 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 12:30:39.396006  161244 start.go:297] selected driver: kvm2
	I0826 12:30:39.396026  161244 start.go:901] validating driver "kvm2" against <nil>
	I0826 12:30:39.396039  161244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:30:39.396814  161244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:30:39.396917  161244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:30:39.415333  161244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:30:39.415416  161244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 12:30:39.415627  161244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:30:39.415681  161244 cni.go:84] Creating CNI manager for "kindnet"
	I0826 12:30:39.415692  161244 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0826 12:30:39.415744  161244 start.go:340] cluster config:
	{Name:kindnet-814705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-814705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:30:39.415830  161244 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:30:39.417796  161244 out.go:177] * Starting "kindnet-814705" primary control-plane node in "kindnet-814705" cluster
	I0826 12:30:37.225104  161060 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 12:30:37.225333  161060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:37.225358  161060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:37.242224  161060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0826 12:30:37.242756  161060 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:37.243503  161060 main.go:141] libmachine: Using API Version  1
	I0826 12:30:37.243525  161060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:37.243966  161060 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:37.244205  161060 main.go:141] libmachine: (auto-814705) Calling .GetMachineName
	I0826 12:30:37.244394  161060 main.go:141] libmachine: (auto-814705) Calling .DriverName
	I0826 12:30:37.244515  161060 start.go:159] libmachine.API.Create for "auto-814705" (driver="kvm2")
	I0826 12:30:37.244547  161060 client.go:168] LocalClient.Create starting
	I0826 12:30:37.244579  161060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 12:30:37.244613  161060 main.go:141] libmachine: Decoding PEM data...
	I0826 12:30:37.244636  161060 main.go:141] libmachine: Parsing certificate...
	I0826 12:30:37.244700  161060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 12:30:37.244727  161060 main.go:141] libmachine: Decoding PEM data...
	I0826 12:30:37.244746  161060 main.go:141] libmachine: Parsing certificate...
	I0826 12:30:37.244772  161060 main.go:141] libmachine: Running pre-create checks...
	I0826 12:30:37.244784  161060 main.go:141] libmachine: (auto-814705) Calling .PreCreateCheck
	I0826 12:30:37.245133  161060 main.go:141] libmachine: (auto-814705) Calling .GetConfigRaw
	I0826 12:30:37.245612  161060 main.go:141] libmachine: Creating machine...
	I0826 12:30:37.245630  161060 main.go:141] libmachine: (auto-814705) Calling .Create
	I0826 12:30:37.245782  161060 main.go:141] libmachine: (auto-814705) Creating KVM machine...
	I0826 12:30:37.247740  161060 main.go:141] libmachine: (auto-814705) DBG | found existing default KVM network
	I0826 12:30:37.249027  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.248889  161093 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:95:f5} reservation:<nil>}
	I0826 12:30:37.250024  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.249904  161093 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:41:2d} reservation:<nil>}
	I0826 12:30:37.250869  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.250744  161093 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8a:2c:e0} reservation:<nil>}
	I0826 12:30:37.251996  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.251880  161093 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030f400}
	I0826 12:30:37.252053  161060 main.go:141] libmachine: (auto-814705) DBG | created network xml: 
	I0826 12:30:37.252074  161060 main.go:141] libmachine: (auto-814705) DBG | <network>
	I0826 12:30:37.252093  161060 main.go:141] libmachine: (auto-814705) DBG |   <name>mk-auto-814705</name>
	I0826 12:30:37.252104  161060 main.go:141] libmachine: (auto-814705) DBG |   <dns enable='no'/>
	I0826 12:30:37.252117  161060 main.go:141] libmachine: (auto-814705) DBG |   
	I0826 12:30:37.252132  161060 main.go:141] libmachine: (auto-814705) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0826 12:30:37.252147  161060 main.go:141] libmachine: (auto-814705) DBG |     <dhcp>
	I0826 12:30:37.252163  161060 main.go:141] libmachine: (auto-814705) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0826 12:30:37.252173  161060 main.go:141] libmachine: (auto-814705) DBG |     </dhcp>
	I0826 12:30:37.252181  161060 main.go:141] libmachine: (auto-814705) DBG |   </ip>
	I0826 12:30:37.252190  161060 main.go:141] libmachine: (auto-814705) DBG |   
	I0826 12:30:37.252198  161060 main.go:141] libmachine: (auto-814705) DBG | </network>
	I0826 12:30:37.252209  161060 main.go:141] libmachine: (auto-814705) DBG | 
	I0826 12:30:37.259536  161060 main.go:141] libmachine: (auto-814705) DBG | trying to create private KVM network mk-auto-814705 192.168.72.0/24...
	I0826 12:30:37.368861  161060 main.go:141] libmachine: (auto-814705) DBG | private KVM network mk-auto-814705 192.168.72.0/24 created
	I0826 12:30:37.368896  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.368813  161093 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:30:37.368921  161060 main.go:141] libmachine: (auto-814705) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705 ...
	I0826 12:30:37.368937  161060 main.go:141] libmachine: (auto-814705) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 12:30:37.374203  161060 main.go:141] libmachine: (auto-814705) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 12:30:37.686551  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.686348  161093 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705/id_rsa...
	I0826 12:30:37.735072  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.734908  161093 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705/auto-814705.rawdisk...
	I0826 12:30:37.735118  161060 main.go:141] libmachine: (auto-814705) DBG | Writing magic tar header
	I0826 12:30:37.735135  161060 main.go:141] libmachine: (auto-814705) DBG | Writing SSH key tar header
	I0826 12:30:37.735148  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:37.735073  161093 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705 ...
	I0826 12:30:37.735265  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705
	I0826 12:30:37.735301  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705 (perms=drwx------)
	I0826 12:30:37.735318  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 12:30:37.735336  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:30:37.735356  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 12:30:37.735372  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 12:30:37.735387  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 12:30:37.735398  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home/jenkins
	I0826 12:30:37.735408  161060 main.go:141] libmachine: (auto-814705) DBG | Checking permissions on dir: /home
	I0826 12:30:37.735417  161060 main.go:141] libmachine: (auto-814705) DBG | Skipping /home - not owner
	I0826 12:30:37.735432  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 12:30:37.735444  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 12:30:37.735482  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 12:30:37.735508  161060 main.go:141] libmachine: (auto-814705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 12:30:37.735526  161060 main.go:141] libmachine: (auto-814705) Creating domain...
	I0826 12:30:37.736706  161060 main.go:141] libmachine: (auto-814705) define libvirt domain using xml: 
	I0826 12:30:37.736755  161060 main.go:141] libmachine: (auto-814705) <domain type='kvm'>
	I0826 12:30:37.736767  161060 main.go:141] libmachine: (auto-814705)   <name>auto-814705</name>
	I0826 12:30:37.736776  161060 main.go:141] libmachine: (auto-814705)   <memory unit='MiB'>3072</memory>
	I0826 12:30:37.736784  161060 main.go:141] libmachine: (auto-814705)   <vcpu>2</vcpu>
	I0826 12:30:37.736795  161060 main.go:141] libmachine: (auto-814705)   <features>
	I0826 12:30:37.736807  161060 main.go:141] libmachine: (auto-814705)     <acpi/>
	I0826 12:30:37.736816  161060 main.go:141] libmachine: (auto-814705)     <apic/>
	I0826 12:30:37.736823  161060 main.go:141] libmachine: (auto-814705)     <pae/>
	I0826 12:30:37.736833  161060 main.go:141] libmachine: (auto-814705)     
	I0826 12:30:37.736841  161060 main.go:141] libmachine: (auto-814705)   </features>
	I0826 12:30:37.736855  161060 main.go:141] libmachine: (auto-814705)   <cpu mode='host-passthrough'>
	I0826 12:30:37.736861  161060 main.go:141] libmachine: (auto-814705)   
	I0826 12:30:37.736876  161060 main.go:141] libmachine: (auto-814705)   </cpu>
	I0826 12:30:37.736886  161060 main.go:141] libmachine: (auto-814705)   <os>
	I0826 12:30:37.736896  161060 main.go:141] libmachine: (auto-814705)     <type>hvm</type>
	I0826 12:30:37.736906  161060 main.go:141] libmachine: (auto-814705)     <boot dev='cdrom'/>
	I0826 12:30:37.736916  161060 main.go:141] libmachine: (auto-814705)     <boot dev='hd'/>
	I0826 12:30:37.736926  161060 main.go:141] libmachine: (auto-814705)     <bootmenu enable='no'/>
	I0826 12:30:37.736936  161060 main.go:141] libmachine: (auto-814705)   </os>
	I0826 12:30:37.736946  161060 main.go:141] libmachine: (auto-814705)   <devices>
	I0826 12:30:37.736954  161060 main.go:141] libmachine: (auto-814705)     <disk type='file' device='cdrom'>
	I0826 12:30:37.736984  161060 main.go:141] libmachine: (auto-814705)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705/boot2docker.iso'/>
	I0826 12:30:37.737002  161060 main.go:141] libmachine: (auto-814705)       <target dev='hdc' bus='scsi'/>
	I0826 12:30:37.737009  161060 main.go:141] libmachine: (auto-814705)       <readonly/>
	I0826 12:30:37.737015  161060 main.go:141] libmachine: (auto-814705)     </disk>
	I0826 12:30:37.737024  161060 main.go:141] libmachine: (auto-814705)     <disk type='file' device='disk'>
	I0826 12:30:37.737032  161060 main.go:141] libmachine: (auto-814705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 12:30:37.737047  161060 main.go:141] libmachine: (auto-814705)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/auto-814705/auto-814705.rawdisk'/>
	I0826 12:30:37.737056  161060 main.go:141] libmachine: (auto-814705)       <target dev='hda' bus='virtio'/>
	I0826 12:30:37.737062  161060 main.go:141] libmachine: (auto-814705)     </disk>
	I0826 12:30:37.737069  161060 main.go:141] libmachine: (auto-814705)     <interface type='network'>
	I0826 12:30:37.737075  161060 main.go:141] libmachine: (auto-814705)       <source network='mk-auto-814705'/>
	I0826 12:30:37.737082  161060 main.go:141] libmachine: (auto-814705)       <model type='virtio'/>
	I0826 12:30:37.737087  161060 main.go:141] libmachine: (auto-814705)     </interface>
	I0826 12:30:37.737094  161060 main.go:141] libmachine: (auto-814705)     <interface type='network'>
	I0826 12:30:37.737099  161060 main.go:141] libmachine: (auto-814705)       <source network='default'/>
	I0826 12:30:37.737106  161060 main.go:141] libmachine: (auto-814705)       <model type='virtio'/>
	I0826 12:30:37.737123  161060 main.go:141] libmachine: (auto-814705)     </interface>
	I0826 12:30:37.737141  161060 main.go:141] libmachine: (auto-814705)     <serial type='pty'>
	I0826 12:30:37.737152  161060 main.go:141] libmachine: (auto-814705)       <target port='0'/>
	I0826 12:30:37.737159  161060 main.go:141] libmachine: (auto-814705)     </serial>
	I0826 12:30:37.737170  161060 main.go:141] libmachine: (auto-814705)     <console type='pty'>
	I0826 12:30:37.737181  161060 main.go:141] libmachine: (auto-814705)       <target type='serial' port='0'/>
	I0826 12:30:37.737190  161060 main.go:141] libmachine: (auto-814705)     </console>
	I0826 12:30:37.737199  161060 main.go:141] libmachine: (auto-814705)     <rng model='virtio'>
	I0826 12:30:37.737234  161060 main.go:141] libmachine: (auto-814705)       <backend model='random'>/dev/random</backend>
	I0826 12:30:37.737261  161060 main.go:141] libmachine: (auto-814705)     </rng>
	I0826 12:30:37.737275  161060 main.go:141] libmachine: (auto-814705)     
	I0826 12:30:37.737285  161060 main.go:141] libmachine: (auto-814705)     
	I0826 12:30:37.737297  161060 main.go:141] libmachine: (auto-814705)   </devices>
	I0826 12:30:37.737307  161060 main.go:141] libmachine: (auto-814705) </domain>
	I0826 12:30:37.737320  161060 main.go:141] libmachine: (auto-814705) 
	I0826 12:30:37.743384  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:8f:b8:e1 in network default
	I0826 12:30:37.745768  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:37.745799  161060 main.go:141] libmachine: (auto-814705) Ensuring networks are active...
	I0826 12:30:37.746751  161060 main.go:141] libmachine: (auto-814705) Ensuring network default is active
	I0826 12:30:37.847822  161060 main.go:141] libmachine: (auto-814705) Ensuring network mk-auto-814705 is active
	I0826 12:30:37.848619  161060 main.go:141] libmachine: (auto-814705) Getting domain xml...
	I0826 12:30:38.398613  161060 main.go:141] libmachine: (auto-814705) Creating domain...
	I0826 12:30:39.701423  161060 main.go:141] libmachine: (auto-814705) Waiting to get IP...
	I0826 12:30:39.702372  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:39.702867  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:39.702894  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:39.702825  161093 retry.go:31] will retry after 224.757935ms: waiting for machine to come up
	I0826 12:30:39.929377  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:39.929920  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:39.929968  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:39.929873  161093 retry.go:31] will retry after 281.711735ms: waiting for machine to come up
	I0826 12:30:40.213560  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:40.214041  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:40.214075  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:40.214001  161093 retry.go:31] will retry after 383.997522ms: waiting for machine to come up
	I0826 12:30:40.599693  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:40.600185  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:40.600216  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:40.600142  161093 retry.go:31] will retry after 500.780433ms: waiting for machine to come up
	I0826 12:30:41.102911  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:41.103412  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:41.103438  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:41.103384  161093 retry.go:31] will retry after 550.115453ms: waiting for machine to come up
	I0826 12:30:41.655040  161060 main.go:141] libmachine: (auto-814705) DBG | domain auto-814705 has defined MAC address 52:54:00:47:7f:84 in network mk-auto-814705
	I0826 12:30:41.655505  161060 main.go:141] libmachine: (auto-814705) DBG | unable to find current IP address of domain auto-814705 in network mk-auto-814705
	I0826 12:30:41.655540  161060 main.go:141] libmachine: (auto-814705) DBG | I0826 12:30:41.655461  161093 retry.go:31] will retry after 592.764842ms: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.153959578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0452767c-a5e1-4768-941a-43088b2c2943 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.155530734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24f8d468-c6cc-4beb-a73f-a628205ac381 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.155938712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675443155914326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24f8d468-c6cc-4beb-a73f-a628205ac381 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.156509873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32acdbb4-c9ce-412a-b8ae-976a979dd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.156564327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32acdbb4-c9ce-412a-b8ae-976a979dd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.156768079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32acdbb4-c9ce-412a-b8ae-976a979dd3a8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.173125019Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6f295df3-85d9-4c5d-9d1a-07d08df6d996 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.174749873Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:26a61de1ca1dcfdec0206230346b6d28edc6fa653a5c17b0b8bf4b953e99ae6e,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-k6mkf,Uid:45ba4fff-060e-4b04-b86c-8e25918b739e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674484576599431,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-k6mkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ba4fff-060e-4b04-b86c-8e25918b739e,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:14:44.268394395Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3acbf90c-c596-49df-8b5c-2a43f90d2008,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674484352473471,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-26T12:14:44.044635179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5tpbm,Uid:3cc20f31-6d6c-4104-93c3-29c1b94de93c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674483105663947,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc20f31-6d6c-4104-93c3-29c1b94de93c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:14:42.797288920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-dhm6d,Uid:a6a9c3c6-91e8-4232
-8cd6-16233be0350f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674483069875330,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:14:42.762189997Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&PodSandboxMetadata{Name:kube-proxy-xnv2b,Uid:b380ae46-11a4-44f2-99b1-428fa493fe99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674482970455278,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:14:42.654581777Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-923586,Uid:62dde358041546cd4c8d10635104e748,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724674472277843712,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 62dde358041546cd4c8d10635104e748,kubernetes.io/config.seen: 2024-08-26T12:14:31.813120295Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc83f2c08a3238a4e1efabba74708c
c62077b6de0debf8a7469b9636662d21e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-923586,Uid:0ad11a11286a378d39ef8ea1f691c2ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674472271373644,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ad11a11286a378d39ef8ea1f691c2ba,kubernetes.io/config.seen: 2024-08-26T12:14:31.813121521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-923586,Uid:c4e9a5b61a4e54109adeb13ea75b637d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674472264478250,Labels:map[string]string{component: etcd,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: c4e9a5b61a4e54109adeb13ea75b637d,kubernetes.io/config.seen: 2024-08-26T12:14:31.813116275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-923586,Uid:2c303dd3b5142852f39eb09b283dc6d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674472248503603,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,tier: control-plane,},Annotations:map[string]
string{kubernetes.io/config.hash: 2c303dd3b5142852f39eb09b283dc6d7,kubernetes.io/config.seen: 2024-08-26T12:14:31.813122320Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-923586,Uid:62dde358041546cd4c8d10635104e748,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724674180574371238,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 62dde358041546cd4c8d10635104e748,kubernetes.io/config.seen: 2024-08-26T12:09:40.091151431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/
interceptors.go:74" id=6f295df3-85d9-4c5d-9d1a-07d08df6d996 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.175731203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6fea941-e02f-4c2d-8f07-a4b41a060fce name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.175792842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6fea941-e02f-4c2d-8f07-a4b41a060fce name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.176087441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6fea941-e02f-4c2d-8f07-a4b41a060fce name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.204184822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f50c932-fb17-46f5-8808-80e1d914ba93 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.204287764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f50c932-fb17-46f5-8808-80e1d914ba93 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.205659652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c1e52b7-5fd4-4565-b0ab-f7cc16de6fca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.206148748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675443206121527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c1e52b7-5fd4-4565-b0ab-f7cc16de6fca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.206799990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ef072e1-3c37-4da5-96cb-c272d3d8a836 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.206874470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ef072e1-3c37-4da5-96cb-c272d3d8a836 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.207511088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ef072e1-3c37-4da5-96cb-c272d3d8a836 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.244843161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73ccc980-e962-42c1-8084-0cd2a1fd3c15 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.244925346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73ccc980-e962-42c1-8084-0cd2a1fd3c15 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.246286739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=363057fe-fca5-42d0-a0e9-a9664828df7e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.246753926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675443246726299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=363057fe-fca5-42d0-a0e9-a9664828df7e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.247255853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c42036a-55f2-443f-bf66-c75cd36e615b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.247313385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c42036a-55f2-443f-bf66-c75cd36e615b name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:43 embed-certs-923586 crio[759]: time="2024-08-26 12:30:43.247528984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1,PodSandboxId:95ba53d3d629ca673a9c675faa5baa12c8edb57fd78f623d9289c7717ac3a62c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674484503953305,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acbf90c-c596-49df-8b5c-2a43f90d2008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7,PodSandboxId:11bfddc0d69c58c4577987bc2545f68245d98dfb9857bd59c9c5ffec47ea4e06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484112152089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dhm6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6a9c3c6-91e8-4232-8cd6-16233be0350f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c,PodSandboxId:8648db7ef81da0d802fe4b66bf0512b9e0c303f9cd82e2910f1a159c38dfea02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674484006082733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5tpbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
cc20f31-6d6c-4104-93c3-29c1b94de93c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53,PodSandboxId:de50a633d662b3d728467fd575b8f1fa36a951d0fccd3472b9348b07ff6a84a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724674483191229626,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnv2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b380ae46-11a4-44f2-99b1-428fa493fe99,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54,PodSandboxId:705db35e71bc975f17cb42a02f1b5d7c640fd8fa51f8981867ef67ecc5eaf329,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674472509376763,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e9a5b61a4e54109adeb13ea75b637d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569,PodSandboxId:f5205492b42488ac8f16b9c2b3168a3f23695cdeadf3e432dc5ec827c566ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674472507858419,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0,PodSandboxId:bc83f2c08a3238a4e1efabba74708cc62077b6de0debf8a7469b9636662d21e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674472483443416,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad11a11286a378d39ef8ea1f691c2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68,PodSandboxId:f9036d4abc01282804b310c2943ae13c382488c344f8ced0709efcd80eeba42e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674472407626934,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c303dd3b5142852f39eb09b283dc6d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1,PodSandboxId:13c46e49fe89bb249520f1078ccc0440a3c8bd9591c96cb9127f0ff56a89b63b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674180742107584,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-923586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62dde358041546cd4c8d10635104e748,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c42036a-55f2-443f-bf66-c75cd36e615b name=/runtime.v1.RuntimeService/ListContainers
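
The ListContainers/ImageFsInfo entries above are CRI-O's gRPC interceptor logging each kubelet RPC at debug level. If a longer window than this capture is needed, the same stream can be pulled from the node's journal; a minimal sketch, using the profile name from this run:

    out/minikube-linux-amd64 -p embed-certs-923586 ssh "sudo journalctl -u crio --no-pager | tail -n 50"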
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f40a433b56c54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   95ba53d3d629c       storage-provisioner
	c045d48a96954       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   11bfddc0d69c5       coredns-6f6b679f8f-dhm6d
	d197649fa398f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   8648db7ef81da       coredns-6f6b679f8f-5tpbm
	18f87b3516f38       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   de50a633d662b       kube-proxy-xnv2b
	aef384d663a79       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   705db35e71bc9       etcd-embed-certs-923586
	70ed553437bb6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   f5205492b4248       kube-apiserver-embed-certs-923586
	12139aa5cc435       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   bc83f2c08a323       kube-controller-manager-embed-certs-923586
	85351f65cbd4e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   f9036d4abc012       kube-scheduler-embed-certs-923586
	75c024c43279a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   13c46e49fe89b       kube-apiserver-embed-certs-923586
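
The table above is the CRI view of the containers on embed-certs-923586; only the attempt-1 kube-apiserver is Exited, everything else is Running. A roughly equivalent listing can be reproduced on the node itself (sketch, assuming crictl is present inside the minikube VM as in current images):

    out/minikube-linux-amd64 -p embed-certs-923586 ssh "sudo crictl ps -a"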
	
	
	==> coredns [c045d48a969545e366a683fbae0fae101579d92eefeba8f8fbf58140dd7ccfb7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d197649fa398ff932c988ef5da19b69336c526233d0c962a3cf899c0ac31bb3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-923586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-923586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=embed-certs-923586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:14:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-923586
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:30:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:30:07 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:30:07 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:30:07 +0000   Mon, 26 Aug 2024 12:14:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:30:07 +0000   Mon, 26 Aug 2024 12:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    embed-certs-923586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6564d7e8f389450fb4fe90c3322850d2
	  System UUID:                6564d7e8-f389-450f-b4fe-90c3322850d2
	  Boot ID:                    ae96d933-1d15-4391-92f0-4db7ffbeb091
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5tpbm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-dhm6d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-923586                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-923586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-923586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xnv2b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-923586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-k6mkf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-923586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-923586 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-923586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-923586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-923586 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-923586 event: Registered Node embed-certs-923586 in Controller
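	
	The node report above shows Ready=True with no memory/disk/PID pressure, so the failures in this group are not a node-health problem. Since metrics-server is the pod that never becomes ready (see the kube-apiserver and kubelet sections below), a quick cross-check is whether the metrics API answers at all; sketch, assuming the kubectl context name matches the profile name as elsewhere in this report:
	
	    kubectl --context embed-certs-923586 describe node embed-certs-923586
	    kubectl --context embed-certs-923586 top node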
	
	
	==> dmesg <==
	[  +0.037979] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.746956] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.936129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.553525] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.647179] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.056376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061544] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.201679] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +0.130781] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +0.319729] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +4.213954] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.067298] kauditd_printk_skb: 154 callbacks suppressed
	[  +2.226302] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +4.582380] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.837328] kauditd_printk_skb: 109 callbacks suppressed
	[Aug26 12:13] kauditd_printk_skb: 2 callbacks suppressed
	[Aug26 12:14] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +0.068130] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.009437] systemd-fstab-generator[3168]: Ignoring "noauto" option for root device
	[  +0.102820] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.343563] systemd-fstab-generator[3283]: Ignoring "noauto" option for root device
	[  +0.118670] kauditd_printk_skb: 12 callbacks suppressed
	[Aug26 12:15] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [aef384d663a796b23ac61a00c23080413d03e6065895a485c418c08ec0677d54] <==
	{"level":"info","ts":"2024-08-26T12:14:33.024118Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:14:33.038536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:14:33.024716Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:14:33.028115Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.044450Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.044511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:14:33.080098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2024-08-26T12:24:33.354233Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-08-26T12:24:33.365369Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"10.231037ms","hash":3996393818,"current-db-size-bytes":2482176,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2482176,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-08-26T12:24:33.365523Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3996393818,"revision":722,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-26T12:29:27.863759Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.461814ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T12:29:27.864672Z","caller":"traceutil/trace.go:171","msg":"trace[1346721610] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1205; }","duration":"234.245596ms","start":"2024-08-26T12:29:27.630215Z","end":"2024-08-26T12:29:27.864460Z","steps":["trace[1346721610] 'range keys from in-memory index tree'  (duration: 233.439706ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:29:33.361566Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2024-08-26T12:29:33.366406Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":965,"took":"4.25412ms","hash":3838586627,"current-db-size-bytes":2482176,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1708032,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-26T12:29:33.366508Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3838586627,"revision":965,"compact-revision":722}
	{"level":"info","ts":"2024-08-26T12:30:22.138377Z","caller":"traceutil/trace.go:171","msg":"trace[897666116] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1452; }","duration":"144.810172ms","start":"2024-08-26T12:30:21.993535Z","end":"2024-08-26T12:30:22.138345Z","steps":["trace[897666116] 'read index received'  (duration: 144.591738ms)","trace[897666116] 'applied index is now lower than readState.Index'  (duration: 217.748µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T12:30:22.138545Z","caller":"traceutil/trace.go:171","msg":"trace[993516396] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"362.92719ms","start":"2024-08-26T12:30:21.775607Z","end":"2024-08-26T12:30:22.138535Z","steps":["trace[993516396] 'process raft request'  (duration: 362.578181ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:30:22.139252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T12:30:21.775583Z","time spent":"362.98596ms","remote":"127.0.0.1:40356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1249 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-26T12:30:22.139554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.024555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T12:30:22.139614Z","caller":"traceutil/trace.go:171","msg":"trace[598321063] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"146.090215ms","start":"2024-08-26T12:30:21.993513Z","end":"2024-08-26T12:30:22.139603Z","steps":["trace[598321063] 'agreement among raft nodes before linearized reading'  (duration: 145.999565ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:30:22.582702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.578557ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349231102661984846 > lease_revoke:<id:1d80918e9bfb81f1>","response":"size:27"}
	{"level":"info","ts":"2024-08-26T12:30:22.582823Z","caller":"traceutil/trace.go:171","msg":"trace[1799938666] linearizableReadLoop","detail":"{readStateIndex:1454; appliedIndex:1453; }","duration":"361.674968ms","start":"2024-08-26T12:30:22.221135Z","end":"2024-08-26T12:30:22.582810Z","steps":["trace[1799938666] 'read index received'  (duration: 44.815625ms)","trace[1799938666] 'applied index is now lower than readState.Index'  (duration: 316.857846ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-26T12:30:22.582900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.768635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-26T12:30:22.582919Z","caller":"traceutil/trace.go:171","msg":"trace[857463374] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1250; }","duration":"361.80121ms","start":"2024-08-26T12:30:22.221112Z","end":"2024-08-26T12:30:22.582913Z","steps":["trace[857463374] 'agreement among raft nodes before linearized reading'  (duration: 361.740774ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:30:22.582968Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T12:30:22.221075Z","time spent":"361.884899ms","remote":"127.0.0.1:40356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":4,"response size":29,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	
	
	==> kernel <==
	 12:30:43 up 21 min,  0 users,  load average: 0.17, 0.15, 0.15
	Linux embed-certs-923586 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70ed553437bb6b1f8c4b39c8001eb73f0c6381ae8bb872737a0bce5a11916569] <==
	I0826 12:27:35.917250       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:27:35.917389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:29:34.916690       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:29:34.917229       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0826 12:29:35.919796       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:29:35.919840       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:29:35.919873       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:29:35.919926       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:29:35.921080       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:29:35.921129       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:30:35.921842       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:30:35.921912       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0826 12:30:35.922370       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:30:35.922512       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:30:35.923145       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:30:35.924342       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
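
The recurring 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API has no healthy backend, which lines up with the metrics-server pod stuck in ImagePullBackOff in the kubelet section below. The APIService status makes this explicit; sketch:

    kubectl --context embed-certs-923586 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-923586 -n kube-system get pod metrics-server-6867b74b74-k6mkf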
	
	
	==> kube-apiserver [75c024c43279acdeaf991b6247aacee1ad3912b3e4d4e61aaeaaa845977f3cd1] <==
	W0826 12:14:27.212572       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.265769       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.324642       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.354677       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.418406       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.432293       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.476133       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.476133       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.514766       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.550380       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.595233       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.637556       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.776434       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.809423       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.842554       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:27.961287       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.049768       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.066628       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.090308       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.091670       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.124425       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.158843       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.192450       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.316145       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:28.439167       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
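
This exited apiserver (attempt 1) only shows that etcd on 127.0.0.1:2379 was not yet reachable during the earlier start around 12:14:27-28; the attempt-2 apiserver above was started shortly afterwards and is serving. If the full log of the exited container is needed, it is still retrievable by ID; sketch, using the ID prefix from the container status table:

    out/minikube-linux-amd64 -p embed-certs-923586 ssh "sudo crictl logs 75c024c43279a"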
	
	
	==> kube-controller-manager [12139aa5cc4356dd4b10b4004e75653a0ba88ce472e00d331d7bbd9e67aeedf0] <==
	E0826 12:25:41.931541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:25:42.391217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:25:44.494616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.893µs"
	E0826 12:26:11.938499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:26:12.400897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:26:41.948346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:26:42.409630       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:27:11.954766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:27:12.417793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:27:41.961977       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:27:42.429411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:28:11.968629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:12.438497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:28:41.975386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:42.446893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:11.982895       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:12.455151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:41.991301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:42.465408       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:30:07.381321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-923586"
	E0826 12:30:11.997799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:30:12.473160       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:30:40.502103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="338.607µs"
	E0826 12:30:42.006745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:30:42.482925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [18f87b3516f3813aa370fa324c714f6eb63c7b4ee464cfe29afdd7e86a8b2a53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:14:43.754958       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:14:43.770406       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0826 12:14:43.770539       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:14:43.965547       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:14:43.965595       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:14:43.970132       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:14:43.989208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:14:43.989497       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:14:43.989525       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:14:43.998910       1 config.go:197] "Starting service config controller"
	I0826 12:14:43.998944       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:14:43.998984       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:14:43.998988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:14:44.002358       1 config.go:326] "Starting node config controller"
	I0826 12:14:44.003947       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:14:44.099948       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:14:44.100087       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:14:44.107697       1 shared_informer.go:320] Caches are synced for node config
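
The "Operation not supported" nftables errors come from kube-proxy's start-up cleanup of leftover nft rules; the proxier actually in use is iptables ("Using iptables Proxier" above), so on this kernel they are benign. Whether nf_tables is available at all in the Buildroot image can be checked in the VM; sketch:

    out/minikube-linux-amd64 -p embed-certs-923586 ssh "lsmod | grep nf_tables"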
	
	
	==> kube-scheduler [85351f65cbd4e55fdcf6035d8270175d3b66c1dc17cd886dbbda2869ce442d68] <==
	W0826 12:14:34.902602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:14:34.902792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:34.903002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:14:34.903086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.717749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 12:14:35.717956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.786826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:14:35.787368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.788692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:14:35.788875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.835838       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:14:35.836246       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:14:35.881367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:14:35.881593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.896591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:14:35.897318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:35.939364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 12:14:35.939656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.093198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:14:36.093306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.121125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 12:14:36.121313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:14:36.245759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:14:36.245976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 12:14:38.473990       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
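
All of the "forbidden" list/watch errors fall in the 12:14:34-36 start-up window, a common transient while the apiserver's RBAC caches are still syncing; the final "Caches are synced" line shows the scheduler recovered. A quick sanity check that it is still healthy (sketch):

    kubectl --context embed-certs-923586 -n kube-system get pod kube-scheduler-embed-certs-923586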
	
	
	==> kubelet <==
	Aug 26 12:29:46 embed-certs-923586 kubelet[3174]: E0826 12:29:46.478370    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:29:47 embed-certs-923586 kubelet[3174]: E0826 12:29:47.780101    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675387779660251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:47 embed-certs-923586 kubelet[3174]: E0826 12:29:47.780142    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675387779660251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:57 embed-certs-923586 kubelet[3174]: E0826 12:29:57.782251    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675397781629239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:57 embed-certs-923586 kubelet[3174]: E0826 12:29:57.782320    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675397781629239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:00 embed-certs-923586 kubelet[3174]: E0826 12:30:00.479203    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:30:07 embed-certs-923586 kubelet[3174]: E0826 12:30:07.783701    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675407783281063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:07 embed-certs-923586 kubelet[3174]: E0826 12:30:07.783747    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675407783281063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:15 embed-certs-923586 kubelet[3174]: E0826 12:30:15.477770    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:30:17 embed-certs-923586 kubelet[3174]: E0826 12:30:17.785414    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675417784944111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:17 embed-certs-923586 kubelet[3174]: E0826 12:30:17.785444    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675417784944111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:27 embed-certs-923586 kubelet[3174]: E0826 12:30:27.787143    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675427786626373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:27 embed-certs-923586 kubelet[3174]: E0826 12:30:27.787188    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675427786626373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:28 embed-certs-923586 kubelet[3174]: E0826 12:30:28.494548    3174 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 26 12:30:28 embed-certs-923586 kubelet[3174]: E0826 12:30:28.494653    3174 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 26 12:30:28 embed-certs-923586 kubelet[3174]: E0826 12:30:28.494972    3174 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w82td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-k6mkf_kube-system(45ba4fff-060e-4b04-b86c-8e25918b739e): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 26 12:30:28 embed-certs-923586 kubelet[3174]: E0826 12:30:28.496420    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]: E0826 12:30:37.500091    3174 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]: E0826 12:30:37.789223    3174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675437788440021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:37 embed-certs-923586 kubelet[3174]: E0826 12:30:37.789722    3174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675437788440021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:40 embed-certs-923586 kubelet[3174]: E0826 12:30:40.478588    3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k6mkf" podUID="45ba4fff-060e-4b04-b86c-8e25918b739e"
	
	
	==> storage-provisioner [f40a433b56c5410454c228c7e97d153affa1780aade34beb2d81aaf98ad33dc1] <==
	I0826 12:14:44.594429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:14:44.621641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:14:44.621702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:14:44.692318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:14:44.692519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc!
	I0826 12:14:44.694919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa6ca5f-0150-4138-b04c-b2f58ecad9f9", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc became leader
	I0826 12:14:44.795805       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-923586_746dd995-b354-4c81-89b3-3df0e4ac3edc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923586 -n embed-certs-923586
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-923586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-k6mkf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf: exit status 1 (68.567404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-k6mkf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-923586 describe pod metrics-server-6867b74b74-k6mkf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (408.40s)
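Not part of the harness output: the kubelet log above shows metrics-server stuck in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, a host that intentionally does not resolve. A minimal sketch of how one might confirm this by hand, assuming kubectl still has the embed-certs-923586 context and that the failing pod is owned by a kube-system deployment named metrics-server (as the pod name above suggests):

  # Show which image the metrics-server deployment is configured to pull
  kubectl --context embed-certs-923586 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # List recent warning events; these should include the "no such host" pull failures
  kubectl --context embed-certs-923586 -n kube-system get events \
    --field-selector type=Warning --sort-by=.lastTimestamp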

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (532.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:33:19.35627548 +0000 UTC m=+6404.681941199
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-697869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.56µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-697869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
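For orientation (not emitted by the test): the check at start_stop_delete_test.go:297 appears to verify that, after the stop/start cycle, the dashboard-metrics-scraper deployment still references registry.k8s.io/echoserver:1.4. A hedged sketch of the equivalent manual check, assuming the default-k8s-diff-port-697869 context is reachable and the kubernetes-dashboard namespace was created by the addon:

  # Are any dashboard pods present at all?
  kubectl --context default-k8s-diff-port-697869 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard
  # Which image does the scraper deployment reference? (expected to contain registry.k8s.io/echoserver:1.4)
  kubectl --context default-k8s-diff-port-697869 -n kubernetes-dashboard \
    get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'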
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-697869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-697869 logs -n 25: (2.192206381s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo cat                           | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo cat                           | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo cat                           | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo cat                           | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo                               | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo find                          | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-814705 sudo crio                          | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-814705                                    | kindnet-814705            | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	| start   | -p enable-default-cni-814705                         | enable-default-cni-814705 | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 pgrep -a                            | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:32 UTC | 26 Aug 24 12:32 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo cat                            | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo cat                            | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo cat                            | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo crictl                         | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo crictl                         | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo find                           | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-814705 sudo ip a s                         | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	| ssh     | -p calico-814705 sudo ip r s                         | calico-814705             | jenkins | v1.33.1 | 26 Aug 24 12:33 UTC | 26 Aug 24 12:33 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:32:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:32:48.200784  165333 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:32:48.201095  165333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:32:48.201108  165333 out.go:358] Setting ErrFile to fd 2...
	I0826 12:32:48.201115  165333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:32:48.201397  165333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:32:48.202301  165333 out.go:352] Setting JSON to false
	I0826 12:32:48.203844  165333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8109,"bootTime":1724667459,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:32:48.203964  165333 start.go:139] virtualization: kvm guest
	I0826 12:32:48.206342  165333 out.go:177] * [enable-default-cni-814705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:32:48.207784  165333 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:32:48.207861  165333 notify.go:220] Checking for updates...
	I0826 12:32:48.211194  165333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:32:48.212586  165333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:32:48.213944  165333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:32:48.215191  165333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:32:48.216739  165333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:32:48.218932  165333 config.go:182] Loaded profile config "calico-814705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:32:48.219138  165333 config.go:182] Loaded profile config "custom-flannel-814705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:32:48.219307  165333 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:32:48.219471  165333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:32:48.262111  165333 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 12:32:48.263518  165333 start.go:297] selected driver: kvm2
	I0826 12:32:48.263543  165333 start.go:901] validating driver "kvm2" against <nil>
	I0826 12:32:48.263556  165333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:32:48.264332  165333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:32:48.264446  165333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:32:48.282488  165333 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:32:48.282567  165333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0826 12:32:48.282878  165333 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0826 12:32:48.282914  165333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:32:48.282969  165333 cni.go:84] Creating CNI manager for "bridge"
	I0826 12:32:48.282977  165333 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 12:32:48.283090  165333 start.go:340] cluster config:
	{Name:enable-default-cni-814705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-814705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:32:48.283255  165333 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:32:48.286209  165333 out.go:177] * Starting "enable-default-cni-814705" primary control-plane node in "enable-default-cni-814705" cluster
	I0826 12:32:48.287451  165333 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:32:48.287514  165333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:32:48.287531  165333 cache.go:56] Caching tarball of preloaded images
	I0826 12:32:48.287634  165333 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:32:48.287655  165333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:32:48.287871  165333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/enable-default-cni-814705/config.json ...
	I0826 12:32:48.287902  165333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/enable-default-cni-814705/config.json: {Name:mk5cfdff6342d5bb27ff843084f749ee53479a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:32:48.288357  165333 start.go:360] acquireMachinesLock for enable-default-cni-814705: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:32:48.288408  165333 start.go:364] duration metric: took 28.248µs to acquireMachinesLock for "enable-default-cni-814705"
	I0826 12:32:48.288432  165333 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-814705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-814705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:32:48.288516  165333 start.go:125] createHost starting for "" (driver="kvm2")
	I0826 12:32:45.758122  163599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0826 12:32:46.015644  163599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0826 12:32:46.221717  163599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0826 12:32:46.404021  163599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0826 12:32:46.786517  163599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0826 12:32:46.786886  163599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-814705 localhost] and IPs [192.168.72.43 127.0.0.1 ::1]
	I0826 12:32:46.955494  163599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0826 12:32:46.955706  163599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-814705 localhost] and IPs [192.168.72.43 127.0.0.1 ::1]
	I0826 12:32:47.144803  163599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0826 12:32:47.302571  163599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0826 12:32:47.440608  163599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0826 12:32:47.440872  163599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:32:47.744346  163599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:32:47.980284  163599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:32:48.046744  163599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:32:48.486247  163599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:32:48.931347  163599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:32:48.932227  163599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:32:48.941963  163599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:32:47.266104  161529 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-9bjnw" in "kube-system" namespace has status "Ready":"False"
	I0826 12:32:49.766064  161529 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-9bjnw" in "kube-system" namespace has status "Ready":"False"
	I0826 12:32:48.944388  163599 out.go:235]   - Booting up control plane ...
	I0826 12:32:48.944523  163599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:32:48.944633  163599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:32:48.945270  163599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:32:48.977554  163599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:32:48.988008  163599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:32:48.988122  163599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:32:49.159094  163599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:32:49.159272  163599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:32:50.159841  163599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001413355s
	I0826 12:32:50.159944  163599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:32:50.772962  161529 pod_ready.go:93] pod "calico-kube-controllers-7fbd86d5c5-9bjnw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:50.772997  161529 pod_ready.go:82] duration metric: took 18.517388543s for pod "calico-kube-controllers-7fbd86d5c5-9bjnw" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.773013  161529 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-4xgnq" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.782302  161529 pod_ready.go:93] pod "calico-node-4xgnq" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:50.782335  161529 pod_ready.go:82] duration metric: took 9.312908ms for pod "calico-node-4xgnq" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.782350  161529 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-pv86b" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.791680  161529 pod_ready.go:93] pod "coredns-6f6b679f8f-pv86b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:50.791704  161529 pod_ready.go:82] duration metric: took 9.347032ms for pod "coredns-6f6b679f8f-pv86b" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.791714  161529 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.800155  161529 pod_ready.go:93] pod "etcd-calico-814705" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:50.800179  161529 pod_ready.go:82] duration metric: took 8.45891ms for pod "etcd-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.800189  161529 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.809341  161529 pod_ready.go:93] pod "kube-apiserver-calico-814705" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:50.809375  161529 pod_ready.go:82] duration metric: took 9.178392ms for pod "kube-apiserver-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:50.809392  161529 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.161019  161529 pod_ready.go:93] pod "kube-controller-manager-calico-814705" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:51.161051  161529 pod_ready.go:82] duration metric: took 351.649291ms for pod "kube-controller-manager-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.161065  161529 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-vmnmq" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.560575  161529 pod_ready.go:93] pod "kube-proxy-vmnmq" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:51.560606  161529 pod_ready.go:82] duration metric: took 399.531731ms for pod "kube-proxy-vmnmq" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.560620  161529 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.962498  161529 pod_ready.go:93] pod "kube-scheduler-calico-814705" in "kube-system" namespace has status "Ready":"True"
	I0826 12:32:51.962528  161529 pod_ready.go:82] duration metric: took 401.900241ms for pod "kube-scheduler-calico-814705" in "kube-system" namespace to be "Ready" ...
	I0826 12:32:51.962543  161529 pod_ready.go:39] duration metric: took 19.716096612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:32:51.962561  161529 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:32:51.962632  161529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:32:51.980936  161529 api_server.go:72] duration metric: took 29.122803244s to wait for apiserver process to appear ...
	I0826 12:32:51.980968  161529 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:32:51.980992  161529 api_server.go:253] Checking apiserver healthz at https://192.168.50.87:8443/healthz ...
	I0826 12:32:51.986388  161529 api_server.go:279] https://192.168.50.87:8443/healthz returned 200:
	ok
	I0826 12:32:51.987544  161529 api_server.go:141] control plane version: v1.31.0
	I0826 12:32:51.987573  161529 api_server.go:131] duration metric: took 6.596949ms to wait for apiserver health ...
	I0826 12:32:51.987584  161529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:32:52.164205  161529 system_pods.go:59] 9 kube-system pods found
	I0826 12:32:52.164267  161529 system_pods.go:61] "calico-kube-controllers-7fbd86d5c5-9bjnw" [9566bebe-7247-458c-8dd2-3e8e5abfd0d4] Running
	I0826 12:32:52.164276  161529 system_pods.go:61] "calico-node-4xgnq" [e77ff4d4-1fd3-4006-889c-c8ec21ad86e9] Running
	I0826 12:32:52.164279  161529 system_pods.go:61] "coredns-6f6b679f8f-pv86b" [48fddaa8-6098-43db-a2cf-4cefd17f347e] Running
	I0826 12:32:52.164283  161529 system_pods.go:61] "etcd-calico-814705" [90c04819-8bf1-4826-8868-34a0693338e3] Running
	I0826 12:32:52.164286  161529 system_pods.go:61] "kube-apiserver-calico-814705" [03a2fc97-7268-491b-a84e-a7be41f0b456] Running
	I0826 12:32:52.164289  161529 system_pods.go:61] "kube-controller-manager-calico-814705" [56338800-dee7-44a3-b996-ee5598c94c40] Running
	I0826 12:32:52.164292  161529 system_pods.go:61] "kube-proxy-vmnmq" [bb51a2a7-33f2-4336-9858-8b3f507c9960] Running
	I0826 12:32:52.164296  161529 system_pods.go:61] "kube-scheduler-calico-814705" [9d2d4223-1100-45ac-bded-59cedb73891c] Running
	I0826 12:32:52.164301  161529 system_pods.go:61] "storage-provisioner" [8cc84165-77c3-449f-b8f8-3d598d08dfca] Running
	I0826 12:32:52.164309  161529 system_pods.go:74] duration metric: took 176.71804ms to wait for pod list to return data ...
	I0826 12:32:52.164318  161529 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:32:52.360895  161529 default_sa.go:45] found service account: "default"
	I0826 12:32:52.360927  161529 default_sa.go:55] duration metric: took 196.599576ms for default service account to be created ...
	I0826 12:32:52.360938  161529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:32:52.566445  161529 system_pods.go:86] 9 kube-system pods found
	I0826 12:32:52.566482  161529 system_pods.go:89] "calico-kube-controllers-7fbd86d5c5-9bjnw" [9566bebe-7247-458c-8dd2-3e8e5abfd0d4] Running
	I0826 12:32:52.566492  161529 system_pods.go:89] "calico-node-4xgnq" [e77ff4d4-1fd3-4006-889c-c8ec21ad86e9] Running
	I0826 12:32:52.566498  161529 system_pods.go:89] "coredns-6f6b679f8f-pv86b" [48fddaa8-6098-43db-a2cf-4cefd17f347e] Running
	I0826 12:32:52.566503  161529 system_pods.go:89] "etcd-calico-814705" [90c04819-8bf1-4826-8868-34a0693338e3] Running
	I0826 12:32:52.566509  161529 system_pods.go:89] "kube-apiserver-calico-814705" [03a2fc97-7268-491b-a84e-a7be41f0b456] Running
	I0826 12:32:52.566514  161529 system_pods.go:89] "kube-controller-manager-calico-814705" [56338800-dee7-44a3-b996-ee5598c94c40] Running
	I0826 12:32:52.566519  161529 system_pods.go:89] "kube-proxy-vmnmq" [bb51a2a7-33f2-4336-9858-8b3f507c9960] Running
	I0826 12:32:52.566524  161529 system_pods.go:89] "kube-scheduler-calico-814705" [9d2d4223-1100-45ac-bded-59cedb73891c] Running
	I0826 12:32:52.566529  161529 system_pods.go:89] "storage-provisioner" [8cc84165-77c3-449f-b8f8-3d598d08dfca] Running
	I0826 12:32:52.566537  161529 system_pods.go:126] duration metric: took 205.593488ms to wait for k8s-apps to be running ...
	I0826 12:32:52.566546  161529 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:32:52.566613  161529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:32:52.584847  161529 system_svc.go:56] duration metric: took 18.289835ms WaitForService to wait for kubelet
	I0826 12:32:52.584884  161529 kubeadm.go:582] duration metric: took 29.726759511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:32:52.584908  161529 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:32:52.761605  161529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:32:52.761635  161529 node_conditions.go:123] node cpu capacity is 2
	I0826 12:32:52.761650  161529 node_conditions.go:105] duration metric: took 176.737126ms to run NodePressure ...
	I0826 12:32:52.761665  161529 start.go:241] waiting for startup goroutines ...
	I0826 12:32:52.761674  161529 start.go:246] waiting for cluster config update ...
	I0826 12:32:52.761688  161529 start.go:255] writing updated cluster config ...
	I0826 12:32:52.761985  161529 ssh_runner.go:195] Run: rm -f paused
	I0826 12:32:52.827457  161529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:32:52.830732  161529 out.go:177] * Done! kubectl is now configured to use "calico-814705" cluster and "default" namespace by default
	I0826 12:32:48.291386  165333 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0826 12:32:48.291642  165333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:32:48.291679  165333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:32:48.309368  165333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I0826 12:32:48.309887  165333 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:32:48.310578  165333 main.go:141] libmachine: Using API Version  1
	I0826 12:32:48.310609  165333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:32:48.311037  165333 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:32:48.311270  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetMachineName
	I0826 12:32:48.311448  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:32:48.311612  165333 start.go:159] libmachine.API.Create for "enable-default-cni-814705" (driver="kvm2")
	I0826 12:32:48.311642  165333 client.go:168] LocalClient.Create starting
	I0826 12:32:48.311680  165333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem
	I0826 12:32:48.311717  165333 main.go:141] libmachine: Decoding PEM data...
	I0826 12:32:48.311734  165333 main.go:141] libmachine: Parsing certificate...
	I0826 12:32:48.311804  165333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem
	I0826 12:32:48.311837  165333 main.go:141] libmachine: Decoding PEM data...
	I0826 12:32:48.311853  165333 main.go:141] libmachine: Parsing certificate...
	I0826 12:32:48.311876  165333 main.go:141] libmachine: Running pre-create checks...
	I0826 12:32:48.311893  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .PreCreateCheck
	I0826 12:32:48.312293  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetConfigRaw
	I0826 12:32:48.312766  165333 main.go:141] libmachine: Creating machine...
	I0826 12:32:48.312785  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .Create
	I0826 12:32:48.312932  165333 main.go:141] libmachine: (enable-default-cni-814705) Creating KVM machine...
	I0826 12:32:48.314404  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found existing default KVM network
	I0826 12:32:48.316183  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:48.315964  165356 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c220}
	I0826 12:32:48.316200  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | created network xml: 
	I0826 12:32:48.316216  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | <network>
	I0826 12:32:48.316226  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   <name>mk-enable-default-cni-814705</name>
	I0826 12:32:48.316237  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   <dns enable='no'/>
	I0826 12:32:48.316244  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   
	I0826 12:32:48.316255  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0826 12:32:48.316262  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |     <dhcp>
	I0826 12:32:48.316295  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0826 12:32:48.316315  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |     </dhcp>
	I0826 12:32:48.316414  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   </ip>
	I0826 12:32:48.316452  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG |   
	I0826 12:32:48.316468  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | </network>
	I0826 12:32:48.316480  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | 
	I0826 12:32:48.322357  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | trying to create private KVM network mk-enable-default-cni-814705 192.168.39.0/24...
	I0826 12:32:48.435596  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting up store path in /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705 ...
	I0826 12:32:48.435655  165333 main.go:141] libmachine: (enable-default-cni-814705) Building disk image from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 12:32:48.435688  165333 main.go:141] libmachine: (enable-default-cni-814705) Downloading /home/jenkins/minikube-integration/19501-99403/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0826 12:32:48.435715  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | private KVM network mk-enable-default-cni-814705 192.168.39.0/24 created
	I0826 12:32:48.435735  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:48.435495  165356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:32:48.731075  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:48.730939  165356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa...
	I0826 12:32:48.791856  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:48.791699  165356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/enable-default-cni-814705.rawdisk...
	I0826 12:32:48.791909  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Writing magic tar header
	I0826 12:32:48.791928  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Writing SSH key tar header
	I0826 12:32:48.791941  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:48.791865  165356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705 ...
	I0826 12:32:48.792046  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705
	I0826 12:32:48.792074  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705 (perms=drwx------)
	I0826 12:32:48.792085  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube/machines
	I0826 12:32:48.792106  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube/machines (perms=drwxr-xr-x)
	I0826 12:32:48.792125  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403/.minikube (perms=drwxr-xr-x)
	I0826 12:32:48.792135  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins/minikube-integration/19501-99403 (perms=drwxrwxr-x)
	I0826 12:32:48.792146  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0826 12:32:48.792167  165333 main.go:141] libmachine: (enable-default-cni-814705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0826 12:32:48.792179  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:32:48.792197  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19501-99403
	I0826 12:32:48.792209  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0826 12:32:48.792223  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home/jenkins
	I0826 12:32:48.792232  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Checking permissions on dir: /home
	I0826 12:32:48.792248  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Skipping /home - not owner
	I0826 12:32:48.792256  165333 main.go:141] libmachine: (enable-default-cni-814705) Creating domain...
	I0826 12:32:48.793427  165333 main.go:141] libmachine: (enable-default-cni-814705) define libvirt domain using xml: 
	I0826 12:32:48.793450  165333 main.go:141] libmachine: (enable-default-cni-814705) <domain type='kvm'>
	I0826 12:32:48.793462  165333 main.go:141] libmachine: (enable-default-cni-814705)   <name>enable-default-cni-814705</name>
	I0826 12:32:48.793470  165333 main.go:141] libmachine: (enable-default-cni-814705)   <memory unit='MiB'>3072</memory>
	I0826 12:32:48.793484  165333 main.go:141] libmachine: (enable-default-cni-814705)   <vcpu>2</vcpu>
	I0826 12:32:48.793497  165333 main.go:141] libmachine: (enable-default-cni-814705)   <features>
	I0826 12:32:48.793510  165333 main.go:141] libmachine: (enable-default-cni-814705)     <acpi/>
	I0826 12:32:48.793528  165333 main.go:141] libmachine: (enable-default-cni-814705)     <apic/>
	I0826 12:32:48.793540  165333 main.go:141] libmachine: (enable-default-cni-814705)     <pae/>
	I0826 12:32:48.793556  165333 main.go:141] libmachine: (enable-default-cni-814705)     
	I0826 12:32:48.793569  165333 main.go:141] libmachine: (enable-default-cni-814705)   </features>
	I0826 12:32:48.793581  165333 main.go:141] libmachine: (enable-default-cni-814705)   <cpu mode='host-passthrough'>
	I0826 12:32:48.793593  165333 main.go:141] libmachine: (enable-default-cni-814705)   
	I0826 12:32:48.793604  165333 main.go:141] libmachine: (enable-default-cni-814705)   </cpu>
	I0826 12:32:48.793617  165333 main.go:141] libmachine: (enable-default-cni-814705)   <os>
	I0826 12:32:48.793629  165333 main.go:141] libmachine: (enable-default-cni-814705)     <type>hvm</type>
	I0826 12:32:48.793651  165333 main.go:141] libmachine: (enable-default-cni-814705)     <boot dev='cdrom'/>
	I0826 12:32:48.793663  165333 main.go:141] libmachine: (enable-default-cni-814705)     <boot dev='hd'/>
	I0826 12:32:48.793676  165333 main.go:141] libmachine: (enable-default-cni-814705)     <bootmenu enable='no'/>
	I0826 12:32:48.793684  165333 main.go:141] libmachine: (enable-default-cni-814705)   </os>
	I0826 12:32:48.793696  165333 main.go:141] libmachine: (enable-default-cni-814705)   <devices>
	I0826 12:32:48.793708  165333 main.go:141] libmachine: (enable-default-cni-814705)     <disk type='file' device='cdrom'>
	I0826 12:32:48.793727  165333 main.go:141] libmachine: (enable-default-cni-814705)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/boot2docker.iso'/>
	I0826 12:32:48.793738  165333 main.go:141] libmachine: (enable-default-cni-814705)       <target dev='hdc' bus='scsi'/>
	I0826 12:32:48.793752  165333 main.go:141] libmachine: (enable-default-cni-814705)       <readonly/>
	I0826 12:32:48.793764  165333 main.go:141] libmachine: (enable-default-cni-814705)     </disk>
	I0826 12:32:48.793776  165333 main.go:141] libmachine: (enable-default-cni-814705)     <disk type='file' device='disk'>
	I0826 12:32:48.793788  165333 main.go:141] libmachine: (enable-default-cni-814705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0826 12:32:48.793811  165333 main.go:141] libmachine: (enable-default-cni-814705)       <source file='/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/enable-default-cni-814705.rawdisk'/>
	I0826 12:32:48.793824  165333 main.go:141] libmachine: (enable-default-cni-814705)       <target dev='hda' bus='virtio'/>
	I0826 12:32:48.793837  165333 main.go:141] libmachine: (enable-default-cni-814705)     </disk>
	I0826 12:32:48.793848  165333 main.go:141] libmachine: (enable-default-cni-814705)     <interface type='network'>
	I0826 12:32:48.793863  165333 main.go:141] libmachine: (enable-default-cni-814705)       <source network='mk-enable-default-cni-814705'/>
	I0826 12:32:48.793873  165333 main.go:141] libmachine: (enable-default-cni-814705)       <model type='virtio'/>
	I0826 12:32:48.793883  165333 main.go:141] libmachine: (enable-default-cni-814705)     </interface>
	I0826 12:32:48.793895  165333 main.go:141] libmachine: (enable-default-cni-814705)     <interface type='network'>
	I0826 12:32:48.793908  165333 main.go:141] libmachine: (enable-default-cni-814705)       <source network='default'/>
	I0826 12:32:48.793921  165333 main.go:141] libmachine: (enable-default-cni-814705)       <model type='virtio'/>
	I0826 12:32:48.793933  165333 main.go:141] libmachine: (enable-default-cni-814705)     </interface>
	I0826 12:32:48.793944  165333 main.go:141] libmachine: (enable-default-cni-814705)     <serial type='pty'>
	I0826 12:32:48.793957  165333 main.go:141] libmachine: (enable-default-cni-814705)       <target port='0'/>
	I0826 12:32:48.793967  165333 main.go:141] libmachine: (enable-default-cni-814705)     </serial>
	I0826 12:32:48.793980  165333 main.go:141] libmachine: (enable-default-cni-814705)     <console type='pty'>
	I0826 12:32:48.793992  165333 main.go:141] libmachine: (enable-default-cni-814705)       <target type='serial' port='0'/>
	I0826 12:32:48.794005  165333 main.go:141] libmachine: (enable-default-cni-814705)     </console>
	I0826 12:32:48.794017  165333 main.go:141] libmachine: (enable-default-cni-814705)     <rng model='virtio'>
	I0826 12:32:48.794031  165333 main.go:141] libmachine: (enable-default-cni-814705)       <backend model='random'>/dev/random</backend>
	I0826 12:32:48.794042  165333 main.go:141] libmachine: (enable-default-cni-814705)     </rng>
	I0826 12:32:48.794052  165333 main.go:141] libmachine: (enable-default-cni-814705)     
	I0826 12:32:48.794063  165333 main.go:141] libmachine: (enable-default-cni-814705)     
	I0826 12:32:48.794074  165333 main.go:141] libmachine: (enable-default-cni-814705)   </devices>
	I0826 12:32:48.794084  165333 main.go:141] libmachine: (enable-default-cni-814705) </domain>
	I0826 12:32:48.794097  165333 main.go:141] libmachine: (enable-default-cni-814705) 
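The domain XML printed above is handed to libvirt in the same way. Reusing the bindings and imports from the previous sketch, the define-then-boot step could look roughly like this (an illustration, not the driver's real code; domainXML stands in for the generated document):

	// Sketch: define and boot a KVM domain from an XML document such as the one above.
	func startDomain(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			return fmt.Errorf("define domain: %w", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM; the log then moves on to "Waiting to get IP..."
			return fmt.Errorf("start domain: %w", err)
		}
		return nil
	}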
	I0826 12:32:48.798904  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:21:42:a0 in network default
	I0826 12:32:48.799775  165333 main.go:141] libmachine: (enable-default-cni-814705) Ensuring networks are active...
	I0826 12:32:48.799800  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:48.800613  165333 main.go:141] libmachine: (enable-default-cni-814705) Ensuring network default is active
	I0826 12:32:48.801156  165333 main.go:141] libmachine: (enable-default-cni-814705) Ensuring network mk-enable-default-cni-814705 is active
	I0826 12:32:48.801871  165333 main.go:141] libmachine: (enable-default-cni-814705) Getting domain xml...
	I0826 12:32:48.802722  165333 main.go:141] libmachine: (enable-default-cni-814705) Creating domain...
	I0826 12:32:50.162768  165333 main.go:141] libmachine: (enable-default-cni-814705) Waiting to get IP...
	I0826 12:32:50.163821  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:50.164415  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:50.164454  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:50.164370  165356 retry.go:31] will retry after 215.174472ms: waiting for machine to come up
	I0826 12:32:50.381021  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:50.381681  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:50.381706  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:50.381632  165356 retry.go:31] will retry after 390.092785ms: waiting for machine to come up
	I0826 12:32:50.773398  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:50.774086  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:50.774111  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:50.774030  165356 retry.go:31] will retry after 319.110544ms: waiting for machine to come up
	I0826 12:32:51.094559  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:51.095264  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:51.095292  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:51.095221  165356 retry.go:31] will retry after 473.172624ms: waiting for machine to come up
	I0826 12:32:51.570732  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:51.571376  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:51.571404  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:51.571317  165356 retry.go:31] will retry after 598.216627ms: waiting for machine to come up
	I0826 12:32:52.171161  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:52.171756  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:52.171787  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:52.171721  165356 retry.go:31] will retry after 683.458705ms: waiting for machine to come up
	I0826 12:32:52.856519  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:52.857040  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:52.857077  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:52.857007  165356 retry.go:31] will retry after 1.023275332s: waiting for machine to come up
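The repeated "unable to find current IP address ... will retry after ..." lines are the driver polling the network's DHCP leases for the new MAC (52:54:00:26:78:b6) with an increasing delay. A minimal polling sketch under the same assumptions (libvirt bindings from the sketches above; fmt, strings and time imports omitted for brevity):

	// Sketch: poll a libvirt network's DHCP leases until the domain's MAC has an address,
	// the loop behind the retry.go lines above.
	func waitForIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			leases, err := network.GetDHCPLeases()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
					return l.IPaddr, nil // later in this log: 192.168.39.96
				}
			}
			time.Sleep(delay)
			if delay < 5*time.Second { // grow the delay roughly like the logged retries
				delay *= 2
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
	}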
	I0826 12:32:55.160090  163599 kubeadm.go:310] [api-check] The API server is healthy after 5.002530807s
	I0826 12:32:55.176777  163599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:32:55.195636  163599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:32:55.240769  163599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:32:55.241062  163599 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-814705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:32:55.257930  163599 kubeadm.go:310] [bootstrap-token] Using token: y26qp6.aprgd19zgizzyaij
	I0826 12:32:55.259803  163599 out.go:235]   - Configuring RBAC rules ...
	I0826 12:32:55.260005  163599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:32:55.268409  163599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:32:55.278648  163599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:32:55.283711  163599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:32:55.288709  163599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:32:55.296727  163599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:32:55.570785  163599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:32:56.012646  163599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:32:56.571509  163599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:32:56.571538  163599 kubeadm.go:310] 
	I0826 12:32:56.571628  163599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:32:56.571643  163599 kubeadm.go:310] 
	I0826 12:32:56.571770  163599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:32:56.571781  163599 kubeadm.go:310] 
	I0826 12:32:56.571816  163599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:32:56.571898  163599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:32:56.571973  163599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:32:56.571983  163599 kubeadm.go:310] 
	I0826 12:32:56.572110  163599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:32:56.572137  163599 kubeadm.go:310] 
	I0826 12:32:56.572207  163599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:32:56.572222  163599 kubeadm.go:310] 
	I0826 12:32:56.572296  163599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:32:56.572405  163599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:32:56.572511  163599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:32:56.572521  163599 kubeadm.go:310] 
	I0826 12:32:56.572650  163599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:32:56.572765  163599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:32:56.572785  163599 kubeadm.go:310] 
	I0826 12:32:56.572907  163599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y26qp6.aprgd19zgizzyaij \
	I0826 12:32:56.573048  163599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:32:56.573079  163599 kubeadm.go:310] 	--control-plane 
	I0826 12:32:56.573094  163599 kubeadm.go:310] 
	I0826 12:32:56.573215  163599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:32:56.573224  163599 kubeadm.go:310] 
	I0826 12:32:56.573374  163599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y26qp6.aprgd19zgizzyaij \
	I0826 12:32:56.573501  163599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:32:56.574287  163599 kubeadm.go:310] W0826 12:32:45.163540     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:32:56.574658  163599 kubeadm.go:310] W0826 12:32:45.164999     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:32:56.574786  163599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
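Both join commands above pin the cluster CA with --discovery-token-ca-cert-hash. That value is "sha256:" followed by the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from the CA file at any time. A small sketch (the path /etc/kubernetes/pki/ca.crt is kubeadm's default and is an assumption here):

	// Sketch: recompute kubeadm's --discovery-token-ca-cert-hash from the cluster CA certificate.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA location
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}

The printed value should match the hash shown in the join commands above.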
	I0826 12:32:56.574817  163599 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0826 12:32:56.576446  163599 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0826 12:32:53.881684  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:53.882498  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:53.882522  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:53.882389  165356 retry.go:31] will retry after 1.32519822s: waiting for machine to come up
	I0826 12:32:55.209010  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:55.209739  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:55.209779  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:55.209641  165356 retry.go:31] will retry after 1.453700321s: waiting for machine to come up
	I0826 12:32:56.664637  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:56.665403  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:56.665444  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:56.665346  165356 retry.go:31] will retry after 1.938375102s: waiting for machine to come up
	I0826 12:32:56.577723  163599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0826 12:32:56.577791  163599 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0826 12:32:56.583430  163599 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0826 12:32:56.583470  163599 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0826 12:32:56.619918  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0826 12:32:57.065669  163599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:32:57.065768  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:57.065792  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-814705 minikube.k8s.io/updated_at=2024_08_26T12_32_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=custom-flannel-814705 minikube.k8s.io/primary=true
	I0826 12:32:57.227489  163599 ops.go:34] apiserver oom_adj: -16
	I0826 12:32:57.227662  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:57.728005  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:58.228078  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:58.728109  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:59.228177  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:32:59.728281  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:33:00.228070  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:33:00.728033  163599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:33:00.892075  163599 kubeadm.go:1113] duration metric: took 3.826376s to wait for elevateKubeSystemPrivileges
	I0826 12:33:00.892115  163599 kubeadm.go:394] duration metric: took 15.953598781s to StartCluster
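The repeated `kubectl get sa default` invocations above are minikube waiting for the default service account to appear before it counts the cluster-admin binding (created a few lines earlier) as effective. Expressed with client-go instead of kubectl, the same wait could look like this sketch (standard client-go packages; the kubeconfig path is copied from the log):

	// Sketch: poll until the "default" ServiceAccount in the "default" namespace exists,
	// mirroring the repeated `kubectl get sa default` calls above.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, getErr := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				return getErr == nil, nil // not found yet: keep polling
			})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("default service account is present")
	}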
	I0826 12:33:00.892142  163599 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:33:00.892236  163599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:33:00.894062  163599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:33:00.894323  163599 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:33:00.894429  163599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0826 12:33:00.894529  163599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:33:00.894575  163599 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-814705"
	I0826 12:33:00.894598  163599 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-814705"
	I0826 12:33:00.894622  163599 host.go:66] Checking if "custom-flannel-814705" exists ...
	I0826 12:33:00.894717  163599 config.go:182] Loaded profile config "custom-flannel-814705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:33:00.894788  163599 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-814705"
	I0826 12:33:00.894827  163599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-814705"
	I0826 12:33:00.895115  163599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:33:00.895158  163599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:33:00.895221  163599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:33:00.895259  163599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:33:00.896784  163599 out.go:177] * Verifying Kubernetes components...
	I0826 12:33:00.900290  163599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:33:00.916039  163599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0826 12:33:00.916602  163599 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:33:00.917194  163599 main.go:141] libmachine: Using API Version  1
	I0826 12:33:00.917216  163599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:33:00.917303  163599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0826 12:33:00.917572  163599 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:33:00.917705  163599 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:33:00.917796  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetState
	I0826 12:33:00.918180  163599 main.go:141] libmachine: Using API Version  1
	I0826 12:33:00.918198  163599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:33:00.918894  163599 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:33:00.919765  163599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:33:00.919793  163599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:33:00.921983  163599 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-814705"
	I0826 12:33:00.922032  163599 host.go:66] Checking if "custom-flannel-814705" exists ...
	I0826 12:33:00.922434  163599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:33:00.922477  163599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:33:00.945761  163599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0826 12:33:00.946455  163599 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:33:00.947172  163599 main.go:141] libmachine: Using API Version  1
	I0826 12:33:00.947200  163599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:33:00.947589  163599 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:33:00.948274  163599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:33:00.948308  163599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:33:00.950099  163599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0826 12:33:00.950698  163599 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:33:00.951268  163599 main.go:141] libmachine: Using API Version  1
	I0826 12:33:00.951290  163599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:33:00.951636  163599 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:33:00.951784  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetState
	I0826 12:33:00.953721  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .DriverName
	I0826 12:33:00.955385  163599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:33:00.957108  163599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:33:00.957135  163599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:33:00.957162  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHHostname
	I0826 12:33:00.961216  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | domain custom-flannel-814705 has defined MAC address 52:54:00:01:3d:bf in network mk-custom-flannel-814705
	I0826 12:33:00.961720  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:3d:bf", ip: ""} in network mk-custom-flannel-814705: {Iface:virbr4 ExpiryTime:2024-08-26 13:32:26 +0000 UTC Type:0 Mac:52:54:00:01:3d:bf Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:custom-flannel-814705 Clientid:01:52:54:00:01:3d:bf}
	I0826 12:33:00.961752  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | domain custom-flannel-814705 has defined IP address 192.168.72.43 and MAC address 52:54:00:01:3d:bf in network mk-custom-flannel-814705
	I0826 12:33:00.961952  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHPort
	I0826 12:33:00.962139  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHKeyPath
	I0826 12:33:00.962271  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHUsername
	I0826 12:33:00.962396  163599 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/custom-flannel-814705/id_rsa Username:docker}
	I0826 12:33:00.974077  163599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0826 12:33:00.974732  163599 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:33:00.975451  163599 main.go:141] libmachine: Using API Version  1
	I0826 12:33:00.975480  163599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:33:00.979417  163599 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:33:00.979730  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetState
	I0826 12:33:00.982252  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .DriverName
	I0826 12:33:00.982550  163599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:33:00.982566  163599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:33:00.982588  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHHostname
	I0826 12:33:00.986328  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | domain custom-flannel-814705 has defined MAC address 52:54:00:01:3d:bf in network mk-custom-flannel-814705
	I0826 12:33:00.986803  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:3d:bf", ip: ""} in network mk-custom-flannel-814705: {Iface:virbr4 ExpiryTime:2024-08-26 13:32:26 +0000 UTC Type:0 Mac:52:54:00:01:3d:bf Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:custom-flannel-814705 Clientid:01:52:54:00:01:3d:bf}
	I0826 12:33:00.986826  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | domain custom-flannel-814705 has defined IP address 192.168.72.43 and MAC address 52:54:00:01:3d:bf in network mk-custom-flannel-814705
	I0826 12:33:00.987198  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHPort
	I0826 12:33:00.987424  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHKeyPath
	I0826 12:33:00.987635  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .GetSSHUsername
	I0826 12:33:00.987852  163599 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/custom-flannel-814705/id_rsa Username:docker}
	I0826 12:33:01.332431  163599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0826 12:33:01.332578  163599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:33:01.435073  163599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:33:01.449653  163599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:33:01.914582  163599 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
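The long sed pipeline above edits the coredns ConfigMap in place so that host.minikube.internal resolves to the host-side gateway (192.168.72.1 for this profile). Done through the API instead of sed, the same edit might look like the sketch below (client-go imports as in the previous sketch, plus fmt and strings; it assumes the Corefile's forward line is indented with eight spaces, as the sed expression does):

	// Sketch: add a hosts{} stanza for host.minikube.internal to CoreDNS's Corefile,
	// an API-level equivalent of the sed pipeline in the log above.
	func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		// Insert the stanza immediately before the forward plugin, as the sed /i command does.
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward . /etc/resolv.conf",
			stanza+"        forward . /etc/resolv.conf", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}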
	I0826 12:33:01.917805  163599 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-814705" to be "Ready" ...
	I0826 12:33:01.918122  163599 main.go:141] libmachine: Making call to close driver server
	I0826 12:33:01.918148  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .Close
	I0826 12:33:01.918472  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | Closing plugin on server side
	I0826 12:33:01.918861  163599 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:33:01.918882  163599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:33:01.918892  163599 main.go:141] libmachine: Making call to close driver server
	I0826 12:33:01.918909  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .Close
	I0826 12:33:01.919243  163599 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:33:01.919256  163599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:33:01.958998  163599 main.go:141] libmachine: Making call to close driver server
	I0826 12:33:01.959032  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .Close
	I0826 12:33:01.959497  163599 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:33:01.959562  163599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:33:02.308886  163599 main.go:141] libmachine: Making call to close driver server
	I0826 12:33:02.308912  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .Close
	I0826 12:33:02.309326  163599 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:33:02.309341  163599 main.go:141] libmachine: (custom-flannel-814705) DBG | Closing plugin on server side
	I0826 12:33:02.309345  163599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:33:02.309385  163599 main.go:141] libmachine: Making call to close driver server
	I0826 12:33:02.309397  163599 main.go:141] libmachine: (custom-flannel-814705) Calling .Close
	I0826 12:33:02.309727  163599 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:33:02.309776  163599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:33:02.312539  163599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0826 12:32:58.605057  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:32:58.605519  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:32:58.605549  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:32:58.605458  165356 retry.go:31] will retry after 2.5017785s: waiting for machine to come up
	I0826 12:33:01.109185  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:01.109682  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:33:01.109707  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:33:01.109616  165356 retry.go:31] will retry after 3.481764166s: waiting for machine to come up
	I0826 12:33:02.314082  163599 addons.go:510] duration metric: took 1.419547626s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0826 12:33:02.420575  163599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-814705" context rescaled to 1 replicas
	I0826 12:33:04.129442  163599 node_ready.go:53] node "custom-flannel-814705" has status "Ready":"False"
	I0826 12:33:04.593173  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:04.593712  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:33:04.593737  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:33:04.593660  165356 retry.go:31] will retry after 3.178107623s: waiting for machine to come up
	I0826 12:33:07.776005  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:07.776615  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find current IP address of domain enable-default-cni-814705 in network mk-enable-default-cni-814705
	I0826 12:33:07.776647  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | I0826 12:33:07.776564  165356 retry.go:31] will retry after 5.632829813s: waiting for machine to come up
	I0826 12:33:06.423807  163599 node_ready.go:53] node "custom-flannel-814705" has status "Ready":"False"
	I0826 12:33:08.922817  163599 node_ready.go:53] node "custom-flannel-814705" has status "Ready":"False"
	I0826 12:33:09.425889  163599 node_ready.go:49] node "custom-flannel-814705" has status "Ready":"True"
	I0826 12:33:09.425918  163599 node_ready.go:38] duration metric: took 7.508066473s for node "custom-flannel-814705" to be "Ready" ...
	I0826 12:33:09.425928  163599 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:33:09.434564  163599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-9jlg8" in "kube-system" namespace to be "Ready" ...
	I0826 12:33:11.441701  163599 pod_ready.go:103] pod "coredns-6f6b679f8f-9jlg8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:33:13.941241  163599 pod_ready.go:103] pod "coredns-6f6b679f8f-9jlg8" in "kube-system" namespace has status "Ready":"False"
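The node_ready/pod_ready lines above poll the API until the Ready condition flips to True. The underlying check amounts to reading the object's status conditions; a sketch of the node-side check with client-go (imports as in the earlier client-go sketch, plus k8s.io/api/core/v1 as corev1):

	// Sketch: report whether a node's Ready condition is True, the check behind the
	// node_ready.go "has status Ready: False/True" lines above.
	func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}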
	I0826 12:33:13.411042  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.411600  165333 main.go:141] libmachine: (enable-default-cni-814705) Found IP for machine: 192.168.39.96
	I0826 12:33:13.411627  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has current primary IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.411643  165333 main.go:141] libmachine: (enable-default-cni-814705) Reserving static IP address...
	I0826 12:33:13.412078  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-814705", mac: "52:54:00:26:78:b6", ip: "192.168.39.96"} in network mk-enable-default-cni-814705
	I0826 12:33:13.501723  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Getting to WaitForSSH function...
	I0826 12:33:13.501755  165333 main.go:141] libmachine: (enable-default-cni-814705) Reserved static IP address: 192.168.39.96
	I0826 12:33:13.501810  165333 main.go:141] libmachine: (enable-default-cni-814705) Waiting for SSH to be available...
	I0826 12:33:13.504394  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.504866  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:13.504899  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.505050  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Using SSH client type: external
	I0826 12:33:13.505088  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa (-rw-------)
	I0826 12:33:13.505140  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:33:13.505159  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | About to run SSH command:
	I0826 12:33:13.505173  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | exit 0
	I0826 12:33:13.635384  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | SSH cmd err, output: <nil>: 
	I0826 12:33:13.635699  165333 main.go:141] libmachine: (enable-default-cni-814705) KVM machine creation complete!
	I0826 12:33:13.636122  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetConfigRaw
	I0826 12:33:13.636690  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:13.636940  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:13.637170  165333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0826 12:33:13.637187  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetState
	I0826 12:33:13.638878  165333 main.go:141] libmachine: Detecting operating system of created instance...
	I0826 12:33:13.638900  165333 main.go:141] libmachine: Waiting for SSH to be available...
	I0826 12:33:13.638910  165333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0826 12:33:13.638920  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:13.643032  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.643466  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:13.643501  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.643627  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:13.643840  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.644029  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.644189  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:13.644357  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:13.644607  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:13.644621  165333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0826 12:33:13.758585  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
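The "About to run SSH command: exit 0" exchange above is libmachine's readiness probe: the new VM counts as reachable once a trivial command succeeds over SSH with the generated key. A hedged sketch of that probe using golang.org/x/crypto/ssh (host, user and key path copied from this log; InsecureIgnoreHostKey mirrors the StrictHostKeyChecking=no option seen earlier):

	// Sketch: SSH readiness probe equivalent to running `exit 0` on the new machine.
	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only; matches StrictHostKeyChecking=no
		}
		client, err := ssh.Dial("tcp", "192.168.39.96:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil { // any failure means SSH is not ready yet
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}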
	I0826 12:33:13.758635  165333 main.go:141] libmachine: Detecting the provisioner...
	I0826 12:33:13.758648  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:13.762088  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.762499  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:13.762530  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.762814  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:13.763143  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.763403  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.763577  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:13.763796  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:13.763978  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:13.763989  165333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0826 12:33:13.875345  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0826 12:33:13.875466  165333 main.go:141] libmachine: found compatible host: buildroot
	I0826 12:33:13.875479  165333 main.go:141] libmachine: Provisioning with buildroot...
	I0826 12:33:13.875494  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetMachineName
	I0826 12:33:13.875789  165333 buildroot.go:166] provisioning hostname "enable-default-cni-814705"
	I0826 12:33:13.875823  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetMachineName
	I0826 12:33:13.876056  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:13.879205  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.879647  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:13.879685  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:13.879822  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:13.880019  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.880341  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:13.880517  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:13.880729  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:13.880908  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:13.880921  165333 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-814705 && echo "enable-default-cni-814705" | sudo tee /etc/hostname
	I0826 12:33:14.010506  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-814705
	
	I0826 12:33:14.010539  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.014975  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.015489  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.015531  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.015792  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:14.016055  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.016286  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.016499  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:14.016671  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:14.016840  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:14.016857  165333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-814705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-814705/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-814705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:33:14.141058  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:33:14.141094  165333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:33:14.141149  165333 buildroot.go:174] setting up certificates
	I0826 12:33:14.141161  165333 provision.go:84] configureAuth start
	I0826 12:33:14.141172  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetMachineName
	I0826 12:33:14.141544  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetIP
	I0826 12:33:14.144763  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.145223  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.145256  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.145447  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.148165  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.148570  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.148596  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.148834  165333 provision.go:143] copyHostCerts
	I0826 12:33:14.148898  165333 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:33:14.148917  165333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:33:14.148994  165333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:33:14.149093  165333 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:33:14.149102  165333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:33:14.149140  165333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:33:14.149203  165333 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:33:14.149213  165333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:33:14.149245  165333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:33:14.149308  165333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-814705 san=[127.0.0.1 192.168.39.96 enable-default-cni-814705 localhost minikube]
	I0826 12:33:14.367011  165333 provision.go:177] copyRemoteCerts
	I0826 12:33:14.367072  165333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:33:14.367096  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.370075  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.370498  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.370528  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.370795  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:14.371064  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.371239  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:14.371398  165333 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa Username:docker}
	I0826 12:33:14.458015  165333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:33:14.482806  165333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0826 12:33:14.506755  165333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:33:14.531111  165333 provision.go:87] duration metric: took 389.932957ms to configureAuth
	I0826 12:33:14.531145  165333 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:33:14.531361  165333 config.go:182] Loaded profile config "enable-default-cni-814705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:33:14.531466  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.534319  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.534695  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.534726  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.534917  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:14.535173  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.535365  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.535512  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:14.535711  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:14.535882  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:14.535898  165333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:33:14.822385  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:33:14.822419  165333 main.go:141] libmachine: Checking connection to Docker...
	I0826 12:33:14.822431  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetURL
	I0826 12:33:14.823923  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | Using libvirt version 6000000
	I0826 12:33:14.826617  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.827045  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.827078  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.827394  165333 main.go:141] libmachine: Docker is up and running!
	I0826 12:33:14.827423  165333 main.go:141] libmachine: Reticulating splines...
	I0826 12:33:14.827434  165333 client.go:171] duration metric: took 26.515778948s to LocalClient.Create
	I0826 12:33:14.827460  165333 start.go:167] duration metric: took 26.515847954s to libmachine.API.Create "enable-default-cni-814705"
	I0826 12:33:14.827472  165333 start.go:293] postStartSetup for "enable-default-cni-814705" (driver="kvm2")
	I0826 12:33:14.827485  165333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:33:14.827510  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:14.827799  165333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:33:14.827835  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.830389  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.830775  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.830804  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.831000  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:14.831199  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.831375  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:14.831520  165333 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa Username:docker}
	I0826 12:33:14.921426  165333 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:33:14.926231  165333 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:33:14.926260  165333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:33:14.926348  165333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:33:14.926459  165333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:33:14.926593  165333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:33:14.937165  165333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:33:14.962691  165333 start.go:296] duration metric: took 135.200714ms for postStartSetup
	I0826 12:33:14.962761  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetConfigRaw
	I0826 12:33:14.963483  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetIP
	I0826 12:33:14.966430  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.966784  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.966814  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.967128  165333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/enable-default-cni-814705/config.json ...
	I0826 12:33:14.967385  165333 start.go:128] duration metric: took 26.678853119s to createHost
	I0826 12:33:14.967419  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:14.970012  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.970293  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:14.970322  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:14.970534  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:14.970791  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.971040  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:14.971188  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:14.971382  165333 main.go:141] libmachine: Using SSH client type: native
	I0826 12:33:14.971597  165333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0826 12:33:14.971616  165333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:33:15.087556  165333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724675595.057174987
	
	I0826 12:33:15.087582  165333 fix.go:216] guest clock: 1724675595.057174987
	I0826 12:33:15.087594  165333 fix.go:229] Guest: 2024-08-26 12:33:15.057174987 +0000 UTC Remote: 2024-08-26 12:33:14.967402337 +0000 UTC m=+26.806419998 (delta=89.77265ms)
	I0826 12:33:15.087615  165333 fix.go:200] guest clock delta is within tolerance: 89.77265ms
	I0826 12:33:15.087620  165333 start.go:83] releasing machines lock for "enable-default-cni-814705", held for 26.799204238s
	I0826 12:33:15.087641  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:15.087971  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetIP
	I0826 12:33:15.090907  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.091319  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:15.091341  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.091610  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:15.092313  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:15.092534  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .DriverName
	I0826 12:33:15.092628  165333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:33:15.092681  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:15.092811  165333 ssh_runner.go:195] Run: cat /version.json
	I0826 12:33:15.092839  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHHostname
	I0826 12:33:15.095888  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.096117  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.096287  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:15.096320  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.096548  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:15.096664  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:15.096690  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:15.096756  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:15.096897  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHPort
	I0826 12:33:15.097119  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHKeyPath
	I0826 12:33:15.097135  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:15.097286  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetSSHUsername
	I0826 12:33:15.097367  165333 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa Username:docker}
	I0826 12:33:15.097440  165333 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/enable-default-cni-814705/id_rsa Username:docker}
	I0826 12:33:15.180081  165333 ssh_runner.go:195] Run: systemctl --version
	I0826 12:33:15.218979  165333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:33:15.384783  165333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:33:15.391270  165333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:33:15.391341  165333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:33:15.410703  165333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:33:15.410737  165333 start.go:495] detecting cgroup driver to use...
	I0826 12:33:15.410821  165333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:33:15.428908  165333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:33:15.446984  165333 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:33:15.447063  165333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:33:15.465313  165333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:33:15.481909  165333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:33:15.612399  165333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:33:15.777759  165333 docker.go:233] disabling docker service ...
	I0826 12:33:15.777821  165333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:33:15.793032  165333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:33:15.807495  165333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:33:15.923677  165333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:33:16.043778  165333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:33:16.058939  165333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:33:16.080090  165333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:33:16.080173  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.092010  165333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:33:16.092099  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.103217  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.115029  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.125854  165333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:33:16.138670  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.153657  165333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.174763  165333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:33:16.190320  165333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:33:16.203801  165333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:33:16.203881  165333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:33:16.218691  165333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:33:16.229242  165333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:33:16.356349  165333 ssh_runner.go:195] Run: sudo systemctl restart crio
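	The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, default_sysctls), fixes up netfilter/ip_forward, and then restarts CRI-O. A condensed Go sketch that generates the two central sed edits plus the restart, with hypothetical helper names; the real sequence in the log also handles conmon_cgroup, net.ipv4.ip_unprivileged_port_start, and br_netfilter:

	package main

	import "fmt"

	// crioConfigCmds returns shell commands like those in the log above that
	// point CRI-O at the desired pause image and cgroup driver, then restart it.
	func crioConfigCmds(pauseImage, cgroupDriver string) []string {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, c := range crioConfigCmds("registry.k8s.io/pause:3.10", "cgroupfs") {
			fmt.Println(c)
		}
	}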
	I0826 12:33:16.526328  165333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:33:16.526415  165333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:33:16.531572  165333 start.go:563] Will wait 60s for crictl version
	I0826 12:33:16.531663  165333 ssh_runner.go:195] Run: which crictl
	I0826 12:33:16.537001  165333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:33:16.592723  165333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
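	The `crictl version` output above is a set of "Key:  value" lines (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion). A small Go parser for that format, assuming hypothetical type and function names (minikube's own parsing may differ):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// crictlVersion holds the fields printed by `crictl version`.
	type crictlVersion struct {
		Version           string
		RuntimeName       string
		RuntimeVersion    string
		RuntimeApiVersion string
	}

	// parseCrictlVersion splits each line on the first ':' and fills the struct.
	func parseCrictlVersion(out string) crictlVersion {
		var v crictlVersion
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			key, val, ok := strings.Cut(sc.Text(), ":")
			if !ok {
				continue
			}
			val = strings.TrimSpace(val)
			switch strings.TrimSpace(key) {
			case "Version":
				v.Version = val
			case "RuntimeName":
				v.RuntimeName = val
			case "RuntimeVersion":
				v.RuntimeVersion = val
			case "RuntimeApiVersion":
				v.RuntimeApiVersion = val
			}
		}
		return v
	}

	func main() {
		out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
		fmt.Printf("%+v\n", parseCrictlVersion(out))
	}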
	I0826 12:33:16.592824  165333 ssh_runner.go:195] Run: crio --version
	I0826 12:33:16.632274  165333 ssh_runner.go:195] Run: crio --version
	I0826 12:33:16.672290  165333 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:33:16.673731  165333 main.go:141] libmachine: (enable-default-cni-814705) Calling .GetIP
	I0826 12:33:16.677133  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:16.677610  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:78:b6", ip: ""} in network mk-enable-default-cni-814705: {Iface:virbr1 ExpiryTime:2024-08-26 13:33:03 +0000 UTC Type:0 Mac:52:54:00:26:78:b6 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:enable-default-cni-814705 Clientid:01:52:54:00:26:78:b6}
	I0826 12:33:16.677643  165333 main.go:141] libmachine: (enable-default-cni-814705) DBG | domain enable-default-cni-814705 has defined IP address 192.168.39.96 and MAC address 52:54:00:26:78:b6 in network mk-enable-default-cni-814705
	I0826 12:33:16.677927  165333 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:33:16.683031  165333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:33:16.696608  165333 kubeadm.go:883] updating cluster {Name:enable-default-cni-814705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:enable-default-cni-814705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:33:16.696757  165333 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:33:16.696841  165333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:33:16.737578  165333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:33:16.737651  165333 ssh_runner.go:195] Run: which lz4
	I0826 12:33:16.742975  165333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:33:16.748612  165333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:33:16.748663  165333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:33:18.039968  165333 crio.go:462] duration metric: took 1.297043288s to copy over tarball
	I0826 12:33:18.040038  165333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
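	The preload step above first checks whether /preloaded.tar.lz4 already exists on the guest (`stat` exits non-zero, so it does not), then copies the cached ~389 MB tarball over SSH and unpacks it with an lz4-aware tar. A minimal local sketch of that decision, with a hypothetical helper standing in for the SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// remoteFileExists mirrors the existence check in the log: run stat on the
	// path and treat a non-zero exit as "missing". Local stand-in for the SSH
	// runner used above; illustrative only.
	func remoteFileExists(path string) bool {
		return exec.Command("stat", "-c", "%s %y", path).Run() == nil
	}

	func main() {
		const preload = "/preloaded.tar.lz4"
		if remoteFileExists(preload) {
			fmt.Println("preload already present, skipping copy")
			return
		}
		// When stat fails (as in the log), the cached tarball is copied over and
		// extracted with: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		fmt.Println("preload missing: copy cached tarball, then extract with lz4-aware tar")
	}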
	
	
	==> CRI-O <==
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.130486225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675600130457849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b4c557a-52b6-46ea-9ed8-1a3da3a16588 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.131510592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67e80607-6485-48bc-9939-94637823711f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.131706150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67e80607-6485-48bc-9939-94637823711f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.132180025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67e80607-6485-48bc-9939-94637823711f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.178501936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af01b385-ebe9-49fc-80fd-387cef9bd440 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.178680285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af01b385-ebe9-49fc-80fd-387cef9bd440 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.180374690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1103412c-7e28-49e4-bf3c-87416698e3c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.180944616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675600180913044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1103412c-7e28-49e4-bf3c-87416698e3c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.181919500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bba4dbce-c5bd-4932-99aa-9c0f91cea88c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.182168481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bba4dbce-c5bd-4932-99aa-9c0f91cea88c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.182475183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bba4dbce-c5bd-4932-99aa-9c0f91cea88c name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.229896322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6c3cce8-bd28-40d3-8cf8-7e18de4bc924 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.230007229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6c3cce8-bd28-40d3-8cf8-7e18de4bc924 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.232333747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21bb99c8-023e-4581-a5ea-a22e775b819a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.232941920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675600232902005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21bb99c8-023e-4581-a5ea-a22e775b819a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.233990583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bafe7e7-7211-455a-ad81-ddb90a545728 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.234162312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bafe7e7-7211-455a-ad81-ddb90a545728 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.234485333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bafe7e7-7211-455a-ad81-ddb90a545728 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.289935093Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdbf3064-37a8-4647-a054-d01d1d2c382b name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.290047610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdbf3064-37a8-4647-a054-d01d1d2c382b name=/runtime.v1.RuntimeService/Version
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.291712952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53a04c71-d848-488e-af9d-e4706d59d8a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.292506167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675600292470124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53a04c71-d848-488e-af9d-e4706d59d8a3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.294794469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08addad5-0705-4f4e-b91a-37954e73d521 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.294892820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08addad5-0705-4f4e-b91a-37954e73d521 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:33:20 default-k8s-diff-port-697869 crio[729]: time="2024-08-26 12:33:20.295255497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185,PodSandboxId:9efb1b4d46bb7eabcef58dd080fd3e1bba40da9d97296bb8e3a366507aacde86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674517831634941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3becb878-fd98-4476-9c05-cfb6260d2e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b,PodSandboxId:a65a74e8752e2679140bc4490f32b9df38757be45795b57c5c78052b9fa9ce9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517313724578,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mg7dz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d15394d-faa4-4bee-a118-346247df5600,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23,PodSandboxId:61b09c1e488a319a0fece89f14a27f5ba4552925694384de467f27befbdc8473,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674517069913117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9tm7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5aa79a64-1ea3-4734-99cf-70ea69b3fce3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654,PodSandboxId:c11c96971b2c6f283354e5f72eb50967311de67eba9efe0bd1314116595b49d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724674516508505500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkklg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 337f5f37-fc3a-45fc-83f0-def91ba4c7af,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4,PodSandboxId:f0c55c67a268204fd48ba3a328cad0a76401ee476a4fff6f4e6b136e66095433,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674505570805116,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e31ae599fe347d3d9295fc494d8ea5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5,PodSandboxId:12f714b572f38470087dc20ebc18edfc101eceee6939579975531149bab5db83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674505603116638,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 927d6abd0aec67a446f5f2e98dd2b53d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386,PodSandboxId:31c5e141f3742343ca4623125655b50f462d58084c5d37c54403ba63cc8db8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674505514487399,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22,PodSandboxId:cda189c36b7ea2432f12a280c88fde5ff78ffbcd6d3ebb0540d2c7c47022b2e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674505448649126,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198cf46b0a0eb15961809ad9ae53f6d3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7,PodSandboxId:274fd81f46af534db23355a51ea573195b3cbd9f5db77e3f61033b1535ec3492,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674220010246935,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-697869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 989b4f97821d727ff7da09d58d81fca4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08addad5-0705-4f4e-b91a-37954e73d521 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	270d1832bad4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   9efb1b4d46bb7       storage-provisioner
	cdb2469bb6273       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   a65a74e8752e2       coredns-6f6b679f8f-mg7dz
	150f52d25ef12       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   61b09c1e488a3       coredns-6f6b679f8f-9tm7v
	db02b9eeafe0b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   18 minutes ago      Running             kube-proxy                0                   c11c96971b2c6       kube-proxy-fkklg
	e74ae7c401295       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   18 minutes ago      Running             kube-controller-manager   2                   12f714b572f38       kube-controller-manager-default-k8s-diff-port-697869
	e5e6f98951857       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Running             etcd                      2                   f0c55c67a2682       etcd-default-k8s-diff-port-697869
	14a06fb6265b2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   18 minutes ago      Running             kube-apiserver            2                   31c5e141f3742       kube-apiserver-default-k8s-diff-port-697869
	db6eabb03fe18       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   18 minutes ago      Running             kube-scheduler            2                   cda189c36b7ea       kube-scheduler-default-k8s-diff-port-697869
	8a8ee2b12fd33       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 minutes ago      Exited              kube-apiserver            1                   274fd81f46af5       kube-apiserver-default-k8s-diff-port-697869
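The status table above is the CRI view of the node's containers at the moment the log bundle was captured. A minimal way to reproduce it by hand is sketched below; this command was not run by the test itself, and the profile name is simply taken from the log lines above.

	# Assumption: the minikube profile is named default-k8s-diff-port-697869 (per the logs above).
	# List all containers known to CRI-O on the node, including exited ones.
	$ minikube -p default-k8s-diff-port-697869 ssh -- sudo crictl ps -a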
	
	
	==> coredns [150f52d25ef129ec5fd4f8946b4f5be19942a04940e06f3428e0341ca5e2ad23] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cdb2469bb6273044d15c145b01e30095a44a1dc23a45f288543a88d6453b680b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-697869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-697869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=default-k8s-diff-port-697869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:15:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-697869
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:33:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:30:39 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:30:39 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:30:39 +0000   Mon, 26 Aug 2024 12:15:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:30:39 +0000   Mon, 26 Aug 2024 12:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.11
	  Hostname:    default-k8s-diff-port-697869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8ef518bda4d40419144f742d287dfbe
	  System UUID:                a8ef518b-da4d-4041-9144-f742d287dfbe
	  Boot ID:                    530fedb0-7883-43c7-9333-889ed0d8b04a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9tm7v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-6f6b679f8f-mg7dz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-697869                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-697869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-697869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fkklg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-697869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-6867b74b74-7d2qs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-697869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-697869 event: Registered Node default-k8s-diff-port-697869 in Controller
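The node summary above could be regenerated against the same cluster with kubectl; the sketch below is illustrative only, and it assumes the kubectl context name matches the minikube profile name.

	# Assumption: minikube created a kubectl context named after the profile.
	$ kubectl --context default-k8s-diff-port-697869 describe node default-k8s-diff-port-697869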
	
	
	==> dmesg <==
	[  +0.041881] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug26 12:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.995076] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561352] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.047844] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.060144] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059269] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.188746] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.147666] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.301793] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.358766] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.064825] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.871859] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.563424] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.324050] kauditd_printk_skb: 59 callbacks suppressed
	[Aug26 12:14] kauditd_printk_skb: 31 callbacks suppressed
	[Aug26 12:15] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.412279] systemd-fstab-generator[2554]: Ignoring "noauto" option for root device
	[  +4.479661] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.583979] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +5.438837] systemd-fstab-generator[3006]: Ignoring "noauto" option for root device
	[  +0.145032] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.327567] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e5e6f98951857755bf5dbe71599309bee383dfe5d21e9171566c5152f57656e4] <==
	{"level":"info","ts":"2024-08-26T12:25:06.367186Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3786207609,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-08-26T12:29:27.642566Z","caller":"traceutil/trace.go:171","msg":"trace[1125269795] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"418.420303ms","start":"2024-08-26T12:29:27.224112Z","end":"2024-08-26T12:29:27.642533Z","steps":["trace[1125269795] 'process raft request'  (duration: 418.174251ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:29:27.644514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T12:29:27.224084Z","time spent":"418.643621ms","remote":"127.0.0.1:35554","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-697869\" mod_revision:1135 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-697869\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-697869\" > >"}
	{"level":"warn","ts":"2024-08-26T12:29:30.276829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.111651ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15718004216196545546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.11\" mod_revision:1137 > success:<request_put:<key:\"/registry/masterleases/192.168.61.11\" value_size:66 lease:6494632179341769736 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.11\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-26T12:29:30.277228Z","caller":"traceutil/trace.go:171","msg":"trace[1821201382] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"191.251551ms","start":"2024-08-26T12:29:30.085958Z","end":"2024-08-26T12:29:30.277210Z","steps":["trace[1821201382] 'process raft request'  (duration: 64.481699ms)","trace[1821201382] 'compare'  (duration: 126.009614ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T12:30:06.366267Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":930}
	{"level":"info","ts":"2024-08-26T12:30:06.370965Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":930,"took":"3.996613ms","hash":16672547,"current-db-size-bytes":2252800,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-26T12:30:06.371150Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":16672547,"revision":930,"compact-revision":687}
	{"level":"info","ts":"2024-08-26T12:30:21.272677Z","caller":"traceutil/trace.go:171","msg":"trace[115602684] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"337.335586ms","start":"2024-08-26T12:30:20.935302Z","end":"2024-08-26T12:30:21.272637Z","steps":["trace[115602684] 'process raft request'  (duration: 337.192725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:30:21.273007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T12:30:20.935283Z","time spent":"337.539351ms","remote":"127.0.0.1:35462","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1185 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-26T12:30:21.528964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.113823ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T12:30:21.529115Z","caller":"traceutil/trace.go:171","msg":"trace[404684286] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1187; }","duration":"143.293847ms","start":"2024-08-26T12:30:21.385805Z","end":"2024-08-26T12:30:21.529098Z","steps":["trace[404684286] 'range keys from in-memory index tree'  (duration: 143.099956ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:31:11.675355Z","caller":"traceutil/trace.go:171","msg":"trace[2091086393] linearizableReadLoop","detail":"{readStateIndex:1437; appliedIndex:1436; }","duration":"104.619641ms","start":"2024-08-26T12:31:11.570713Z","end":"2024-08-26T12:31:11.675333Z","steps":["trace[2091086393] 'read index received'  (duration: 104.451497ms)","trace[2091086393] 'applied index is now lower than readState.Index'  (duration: 167.514µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T12:31:11.675454Z","caller":"traceutil/trace.go:171","msg":"trace[1586373466] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"116.382171ms","start":"2024-08-26T12:31:11.559065Z","end":"2024-08-26T12:31:11.675447Z","steps":["trace[1586373466] 'process raft request'  (duration: 116.122509ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:31:11.675646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.914314ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T12:31:11.675683Z","caller":"traceutil/trace.go:171","msg":"trace[827377519] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1229; }","duration":"104.970024ms","start":"2024-08-26T12:31:11.570707Z","end":"2024-08-26T12:31:11.675677Z","steps":["trace[827377519] 'agreement among raft nodes before linearized reading'  (duration: 104.89813ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:31:37.952385Z","caller":"traceutil/trace.go:171","msg":"trace[537798475] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"128.359284ms","start":"2024-08-26T12:31:37.823983Z","end":"2024-08-26T12:31:37.952342Z","steps":["trace[537798475] 'process raft request'  (duration: 128.19914ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:31:38.370000Z","caller":"traceutil/trace.go:171","msg":"trace[1983079264] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"539.895964ms","start":"2024-08-26T12:31:37.830079Z","end":"2024-08-26T12:31:38.369975Z","steps":["trace[1983079264] 'process raft request'  (duration: 539.668942ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:31:38.370271Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-26T12:31:37.830061Z","time spent":"540.068917ms","remote":"127.0.0.1:35554","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-slye5tzhehoxtcngh36pk6unbi\" mod_revision:1240 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-slye5tzhehoxtcngh36pk6unbi\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-slye5tzhehoxtcngh36pk6unbi\" > >"}
	{"level":"info","ts":"2024-08-26T12:31:39.953340Z","caller":"traceutil/trace.go:171","msg":"trace[499748161] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"139.855574ms","start":"2024-08-26T12:31:39.813465Z","end":"2024-08-26T12:31:39.953321Z","steps":["trace[499748161] 'process raft request'  (duration: 139.71689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-26T12:31:40.143874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.458429ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15718004216196546330 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1248 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-26T12:31:40.144089Z","caller":"traceutil/trace.go:171","msg":"trace[2093263164] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"180.804896ms","start":"2024-08-26T12:31:39.963260Z","end":"2024-08-26T12:31:40.144065Z","steps":["trace[2093263164] 'process raft request'  (duration: 53.834615ms)","trace[2093263164] 'compare'  (duration: 126.291543ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-26T12:31:40.942680Z","caller":"traceutil/trace.go:171","msg":"trace[1480153533] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"133.406077ms","start":"2024-08-26T12:31:40.809244Z","end":"2024-08-26T12:31:40.942650Z","steps":["trace[1480153533] 'process raft request'  (duration: 133.20996ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:32:08.585328Z","caller":"traceutil/trace.go:171","msg":"trace[1020181385] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"268.711489ms","start":"2024-08-26T12:32:08.316414Z","end":"2024-08-26T12:32:08.585126Z","steps":["trace[1020181385] 'process raft request'  (duration: 268.179009ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:32:43.113854Z","caller":"traceutil/trace.go:171","msg":"trace[932328994] transaction","detail":"{read_only:false; response_revision:1303; number_of_response:1; }","duration":"288.450795ms","start":"2024-08-26T12:32:42.825374Z","end":"2024-08-26T12:32:43.113825Z","steps":["trace[932328994] 'process raft request'  (duration: 288.24608ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:33:20 up 23 min,  0 users,  load average: 0.16, 0.17, 0.14
	Linux default-k8s-diff-port-697869 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14a06fb6265b2c3e8e92cbf2eca67ed4fa5cce9bcb081a3c2122aaccbeaf6386] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0826 12:30:09.200169       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:30:09.200369       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0826 12:30:09.201554       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:30:09.201633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:31:09.201922       1 handler_proxy.go:99] no RequestInfo found in the context
	W0826 12:31:09.201956       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:31:09.202485       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0826 12:31:09.202635       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:31:09.203697       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:31:09.203773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:33:09.204393       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:33:09.204590       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0826 12:33:09.204394       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:33:09.204646       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0826 12:33:09.205982       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:33:09.206098       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
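The repeated 503 errors above mean the aggregated v1beta1.metrics.k8s.io API has no healthy backend, i.e. metrics-server is not serving. Two illustrative checks are sketched below; they were not part of the test run, the context name is assumed to match the minikube profile, and the label selector is assumed from the stock metrics-server manifest.

	# Assumption: kubectl context is named after the minikube profile.
	# Check whether the aggregated API reports Available=True.
	$ kubectl --context default-k8s-diff-port-697869 get apiservice v1beta1.metrics.k8s.io
	# Inspect the metrics-server pod itself; the k8s-app label is an assumption.
	$ kubectl --context default-k8s-diff-port-697869 -n kube-system get pods -l k8s-app=metrics-server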
	
	
	==> kube-apiserver [8a8ee2b12fd338d4948889ba067056c0ef0fe9ac12a1c235efb58e3e583e12e7] <==
	W0826 12:14:59.975893       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.975986       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.979555       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.988357       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998127       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998214       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:14:59.998476       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.037934       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.039403       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.039417       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.073418       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.084629       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.094405       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.101158       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.113258       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.195492       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.223868       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.262453       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.272216       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.382654       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.389272       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.389577       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.491965       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.594453       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:00.664350       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e74ae7c4012958cff76b86e6542f85ea6ff45bbfefbffc8f2b3d8f3b11449dc5] <==
	E0826 12:28:15.299350       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:15.764196       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:28:45.306188       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:45.773257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:15.313101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:15.781419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:45.320174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:45.790978       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:30:15.327756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:30:15.800579       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:30:39.341239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-697869"
	E0826 12:30:45.335737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:30:45.809848       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:31:15.347152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:31:15.820693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:31:39.958725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="988.989µs"
	E0826 12:31:45.355496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:31:45.828998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:31:51.819457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="202.702µs"
	E0826 12:32:15.363594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:32:15.839575       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:32:45.370979       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:32:45.849093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:33:15.379597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:33:15.860740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [db02b9eeafe0bdad936dd247fa4b6bc0f362b0eff287de111756a61823a6b654] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:15:17.061137       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:15:17.075555       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.11"]
	E0826 12:15:17.075631       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:15:17.302425       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:15:17.302496       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:15:17.302542       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:15:17.307274       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:15:17.307604       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:15:17.307626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:15:17.309334       1 config.go:197] "Starting service config controller"
	I0826 12:15:17.309360       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:15:17.309389       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:15:17.309393       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:15:17.330223       1 config.go:326] "Starting node config controller"
	I0826 12:15:17.330294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:15:17.411848       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:15:17.412314       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:15:17.430964       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db6eabb03fe18bffd5077afd34a30179d2bfb088eec8450fa034ec0924b9ff22] <==
	W0826 12:15:09.049948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.050006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.057078       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:15:09.057117       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:15:09.086218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:15:09.086276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.094610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0826 12:15:09.094656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.121636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.121689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.222400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:15:09.222456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.285634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:15:09.286382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.310195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0826 12:15:09.310257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.444300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.444365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.548597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:09.548647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.609179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:15:09.609239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:09.610708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:15:09.610774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0826 12:15:11.512940       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:32:14 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:14.800656    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:32:21 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:21.082540    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675541081657720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:21 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:21.082575    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675541081657720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:25 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:25.800224    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:32:31 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:31.086188    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675551084547953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:31 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:31.086290    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675551084547953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:37 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:37.801285    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:32:41 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:41.093159    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675561092230979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:41 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:41.093191    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675561092230979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:50 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:50.801530    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:32:51 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:51.096927    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675571096246676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:32:51 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:32:51.097050    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675571096246676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:01 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:01.101186    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675581100096934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:01 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:01.101649    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675581100096934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:02 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:02.801566    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:33:10 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:10.811188    2881 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:33:10 default-k8s-diff-port-697869 kubelet[2881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:33:10 default-k8s-diff-port-697869 kubelet[2881]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:33:10 default-k8s-diff-port-697869 kubelet[2881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:33:10 default-k8s-diff-port-697869 kubelet[2881]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:33:11 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:11.103777    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675591103452759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:11 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:11.103822    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675591103452759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:14 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:14.801161    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7d2qs" podUID="c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d"
	Aug 26 12:33:21 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:21.106206    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675601105588329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:33:21 default-k8s-diff-port-697869 kubelet[2881]: E0826 12:33:21.106239    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675601105588329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [270d1832bad4add621c2ab246e24086ae191ef63d90826b9581ebedba771a185] <==
	I0826 12:15:17.975676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:15:18.031766       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:15:18.031849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:15:18.054506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:15:18.054675       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed!
	I0826 12:15:18.054774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e92b5516-2d40-428d-bcd3-b1afcc4daa01", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed became leader
	I0826 12:15:18.156329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-697869_656b91ad-0335-4727-8ce1-96984fc792ed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7d2qs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs: exit status 1 (73.858741ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7d2qs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-697869 describe pod metrics-server-6867b74b74-7d2qs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (532.76s)
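The kubelet entries above show metrics-server-6867b74b74-7d2qs stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, the deliberate registry override these tests enable (addons enable metrics-server ... --registries=MetricsServer=fake.domain in the Audit output), which is why it is the only non-running pod at post-mortem time. A minimal sketch of how that state could be checked by hand against the same profile; the commands and the k8s-app labels below are illustrative assumptions, not taken from the recorded run:

	kubectl --context default-k8s-diff-port-697869 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-697869 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-697869 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard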

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956479 -n no-preload-956479
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-26 12:30:35.239075956 +0000 UTC m=+6240.564741683
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-956479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-956479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.472µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-956479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
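The describe call at start_stop_delete_test.go:291 returned context deadline exceeded after only 2.472µs, meaning the 9m0s test context had already expired before the command ran, so no deployment info was captured. Assuming the no-preload-956479 cluster were still reachable, the image check the test performs could be approximated by hand roughly like this (illustrative command, not part of the recorded run):

	kubectl --context no-preload-956479 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'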
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-956479 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-956479 logs -n 25: (1.352751779s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:28 UTC | 26 Aug 24 12:28 UTC |
	| start   | -p newest-cni-114926 --memory=2200 --alsologtostderr   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:28 UTC | 26 Aug 24 12:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-114926             | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-114926                  | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-114926 --memory=2200 --alsologtostderr   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:29 UTC | 26 Aug 24 12:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-114926 image list                           | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-114926                                   | newest-cni-114926            | jenkins | v1.33.1 | 26 Aug 24 12:30 UTC | 26 Aug 24 12:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:29:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:29:54.570899  160268 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:29:54.571160  160268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:29:54.571170  160268 out.go:358] Setting ErrFile to fd 2...
	I0826 12:29:54.571179  160268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:29:54.571393  160268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:29:54.571952  160268 out.go:352] Setting JSON to false
	I0826 12:29:54.572902  160268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7936,"bootTime":1724667459,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:29:54.572965  160268 start.go:139] virtualization: kvm guest
	I0826 12:29:54.575486  160268 out.go:177] * [newest-cni-114926] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:29:54.576953  160268 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:29:54.576985  160268 notify.go:220] Checking for updates...
	I0826 12:29:54.579765  160268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:29:54.581172  160268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:29:54.582403  160268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:29:54.583667  160268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:29:54.584884  160268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:29:54.586802  160268 config.go:182] Loaded profile config "newest-cni-114926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:29:54.587457  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:29:54.587554  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:29:54.603281  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0826 12:29:54.603744  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:29:54.604371  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:29:54.604394  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:29:54.604792  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:29:54.605085  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:29:54.605361  160268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:29:54.605745  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:29:54.605789  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:29:54.622652  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0826 12:29:54.623166  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:29:54.624170  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:29:54.624197  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:29:54.624569  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:29:54.624807  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:29:54.665230  160268 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:29:54.666444  160268 start.go:297] selected driver: kvm2
	I0826 12:29:54.666463  160268 start.go:901] validating driver "kvm2" against &{Name:newest-cni-114926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-114926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:29:54.666589  160268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:29:54.667370  160268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:29:54.667450  160268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:29:54.683573  160268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:29:54.683954  160268 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0826 12:29:54.684026  160268 cni.go:84] Creating CNI manager for ""
	I0826 12:29:54.684033  160268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:29:54.684087  160268 start.go:340] cluster config:
	{Name:newest-cni-114926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-114926 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:29:54.684208  160268 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:29:54.686005  160268 out.go:177] * Starting "newest-cni-114926" primary control-plane node in "newest-cni-114926" cluster
	I0826 12:29:54.687155  160268 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:29:54.687202  160268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:29:54.687215  160268 cache.go:56] Caching tarball of preloaded images
	I0826 12:29:54.687314  160268 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:29:54.687327  160268 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:29:54.687463  160268 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/config.json ...
	I0826 12:29:54.687673  160268 start.go:360] acquireMachinesLock for newest-cni-114926: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:29:54.687724  160268 start.go:364] duration metric: took 30.722µs to acquireMachinesLock for "newest-cni-114926"
	I0826 12:29:54.687738  160268 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:29:54.687746  160268 fix.go:54] fixHost starting: 
	I0826 12:29:54.688039  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:29:54.688080  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:29:54.705820  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0826 12:29:54.706327  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:29:54.706809  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:29:54.706851  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:29:54.707215  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:29:54.707398  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:29:54.707604  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:29:54.709222  160268 fix.go:112] recreateIfNeeded on newest-cni-114926: state=Stopped err=<nil>
	I0826 12:29:54.709267  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	W0826 12:29:54.709435  160268 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:29:54.711435  160268 out.go:177] * Restarting existing kvm2 VM for "newest-cni-114926" ...
	I0826 12:29:54.712797  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Start
	I0826 12:29:54.713023  160268 main.go:141] libmachine: (newest-cni-114926) Ensuring networks are active...
	I0826 12:29:54.713881  160268 main.go:141] libmachine: (newest-cni-114926) Ensuring network default is active
	I0826 12:29:54.714626  160268 main.go:141] libmachine: (newest-cni-114926) Ensuring network mk-newest-cni-114926 is active
	I0826 12:29:54.717125  160268 main.go:141] libmachine: (newest-cni-114926) Getting domain xml...
	I0826 12:29:54.717940  160268 main.go:141] libmachine: (newest-cni-114926) Creating domain...
	I0826 12:29:56.013953  160268 main.go:141] libmachine: (newest-cni-114926) Waiting to get IP...
	I0826 12:29:56.014781  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:56.015293  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:56.015349  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:56.015254  160303 retry.go:31] will retry after 207.662166ms: waiting for machine to come up
	I0826 12:29:56.224791  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:56.225491  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:56.225522  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:56.225439  160303 retry.go:31] will retry after 314.104242ms: waiting for machine to come up
	I0826 12:29:56.540854  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:56.541246  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:56.541313  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:56.541220  160303 retry.go:31] will retry after 425.829078ms: waiting for machine to come up
	I0826 12:29:56.968795  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:56.969339  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:56.969369  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:56.969278  160303 retry.go:31] will retry after 585.124601ms: waiting for machine to come up
	I0826 12:29:57.555964  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:57.556525  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:57.556549  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:57.556475  160303 retry.go:31] will retry after 724.610285ms: waiting for machine to come up
	I0826 12:29:58.282355  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:58.282893  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:58.282925  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:58.282816  160303 retry.go:31] will retry after 736.448714ms: waiting for machine to come up
	I0826 12:29:59.020709  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:59.021075  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:59.021103  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:59.021040  160303 retry.go:31] will retry after 948.878596ms: waiting for machine to come up
	I0826 12:29:59.972115  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:29:59.972671  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:29:59.972698  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:29:59.972616  160303 retry.go:31] will retry after 1.263115273s: waiting for machine to come up
	I0826 12:30:01.237501  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:01.237955  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:30:01.237987  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:30:01.237902  160303 retry.go:31] will retry after 1.43874857s: waiting for machine to come up
	I0826 12:30:02.677864  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:02.678314  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:30:02.678429  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:30:02.678291  160303 retry.go:31] will retry after 2.035686795s: waiting for machine to come up
	I0826 12:30:04.716198  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:04.716725  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:30:04.716757  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:30:04.716658  160303 retry.go:31] will retry after 1.902042451s: waiting for machine to come up
	I0826 12:30:06.619984  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:06.620535  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:30:06.620582  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:30:06.620474  160303 retry.go:31] will retry after 2.889899731s: waiting for machine to come up
	I0826 12:30:09.512157  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:09.512618  160268 main.go:141] libmachine: (newest-cni-114926) DBG | unable to find current IP address of domain newest-cni-114926 in network mk-newest-cni-114926
	I0826 12:30:09.512667  160268 main.go:141] libmachine: (newest-cni-114926) DBG | I0826 12:30:09.512555  160303 retry.go:31] will retry after 3.280172197s: waiting for machine to come up
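The repeated "will retry after ..." lines above are minikube's wait-for-DHCP loop: the libvirt leases for network mk-newest-cni-114926 are polled with a growing delay until the domain reports an address. Below is a minimal Go sketch of that polling pattern; the lookupLeaseIP helper and the delay schedule are hypothetical stand-ins for illustration, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt DHCP
// leases of the VM's network; it fails until a lease for the MAC appears.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

// waitForIP polls with a growing delay, mirroring the "will retry after ..."
// lines above, and gives up once the overall timeout is exceeded.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	// Short timeout so the example finishes quickly.
	fmt.Println(waitForIP("52:54:00:6b:45:c8", 2*time.Second))
}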
	I0826 12:30:12.796818  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.797356  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has current primary IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.797390  160268 main.go:141] libmachine: (newest-cni-114926) Found IP for machine: 192.168.72.54
	I0826 12:30:12.797405  160268 main.go:141] libmachine: (newest-cni-114926) Reserving static IP address...
	I0826 12:30:12.797895  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "newest-cni-114926", mac: "52:54:00:6b:45:c8", ip: "192.168.72.54"} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:12.797923  160268 main.go:141] libmachine: (newest-cni-114926) Reserved static IP address: 192.168.72.54
	I0826 12:30:12.797941  160268 main.go:141] libmachine: (newest-cni-114926) DBG | skip adding static IP to network mk-newest-cni-114926 - found existing host DHCP lease matching {name: "newest-cni-114926", mac: "52:54:00:6b:45:c8", ip: "192.168.72.54"}
	I0826 12:30:12.797961  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Getting to WaitForSSH function...
	I0826 12:30:12.797974  160268 main.go:141] libmachine: (newest-cni-114926) Waiting for SSH to be available...
	I0826 12:30:12.800433  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.800878  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:12.800919  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.800985  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Using SSH client type: external
	I0826 12:30:12.801040  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa (-rw-------)
	I0826 12:30:12.801078  160268 main.go:141] libmachine: (newest-cni-114926) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:30:12.801091  160268 main.go:141] libmachine: (newest-cni-114926) DBG | About to run SSH command:
	I0826 12:30:12.801104  160268 main.go:141] libmachine: (newest-cni-114926) DBG | exit 0
	I0826 12:30:12.931015  160268 main.go:141] libmachine: (newest-cni-114926) DBG | SSH cmd err, output: <nil>: 
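The "Using SSH client type: external" block above lists the exact flags minikube hands to the system ssh binary: key-only authentication, host-key checking disabled, no control master, one attempt per command. A rough sketch of issuing such a one-shot command from Go with os/exec follows; the ip, keyPath, and command values are placeholders, not minikube's internals.

package main

import (
	"fmt"
	"os/exec"
)

// runRemote shells out to the system ssh binary with the same style of
// options seen in the log: identity-file auth only, no known_hosts checks.
func runRemote(ip, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		command,
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return string(out), fmt.Errorf("ssh %q failed: %w", command, err)
	}
	return string(out), nil
}

func main() {
	// "exit 0" is the same liveness probe used while waiting for SSH above.
	out, err := runRemote("192.168.72.54", "/path/to/id_rsa", "exit 0")
	fmt.Println(out, err)
}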
	I0826 12:30:12.931368  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetConfigRaw
	I0826 12:30:12.932053  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetIP
	I0826 12:30:12.934791  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.935229  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:12.935260  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.935480  160268 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/config.json ...
	I0826 12:30:12.935695  160268 machine.go:93] provisionDockerMachine start ...
	I0826 12:30:12.935715  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:12.935962  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:12.938278  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.938700  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:12.938733  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:12.938929  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:12.939124  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:12.939300  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:12.939525  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:12.939677  160268 main.go:141] libmachine: Using SSH client type: native
	I0826 12:30:12.939887  160268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0826 12:30:12.939901  160268 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:30:13.051701  160268 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:30:13.051737  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetMachineName
	I0826 12:30:13.052028  160268 buildroot.go:166] provisioning hostname "newest-cni-114926"
	I0826 12:30:13.052055  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetMachineName
	I0826 12:30:13.052268  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.055221  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.055647  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.055681  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.055877  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:13.056076  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.056233  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.056370  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:13.056556  160268 main.go:141] libmachine: Using SSH client type: native
	I0826 12:30:13.056735  160268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0826 12:30:13.056761  160268 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-114926 && echo "newest-cni-114926" | sudo tee /etc/hostname
	I0826 12:30:13.189582  160268 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-114926
	
	I0826 12:30:13.189612  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.192311  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.192706  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.192735  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.192858  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:13.193087  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.193258  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.193450  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:13.193640  160268 main.go:141] libmachine: Using SSH client type: native
	I0826 12:30:13.193851  160268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0826 12:30:13.193869  160268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-114926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-114926/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-114926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:30:13.311623  160268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:30:13.311695  160268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:30:13.311760  160268 buildroot.go:174] setting up certificates
	I0826 12:30:13.311771  160268 provision.go:84] configureAuth start
	I0826 12:30:13.311783  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetMachineName
	I0826 12:30:13.312144  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetIP
	I0826 12:30:13.314943  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.315308  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.315338  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.315500  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.318320  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.318662  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.318714  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.318893  160268 provision.go:143] copyHostCerts
	I0826 12:30:13.318965  160268 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:30:13.318989  160268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:30:13.319089  160268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:30:13.319221  160268 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:30:13.319233  160268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:30:13.319262  160268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:30:13.319369  160268 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:30:13.319378  160268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:30:13.319405  160268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:30:13.319457  160268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.newest-cni-114926 san=[127.0.0.1 192.168.72.54 localhost minikube newest-cni-114926]
	I0826 12:30:13.485241  160268 provision.go:177] copyRemoteCerts
	I0826 12:30:13.485301  160268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:30:13.485329  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.488750  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.489081  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.489135  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.489342  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:13.489560  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.489793  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:13.489965  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:13.577710  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:30:13.601971  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:30:13.625183  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:30:13.652488  160268 provision.go:87] duration metric: took 340.704012ms to configureAuth
	I0826 12:30:13.652524  160268 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:30:13.652772  160268 config.go:182] Loaded profile config "newest-cni-114926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:30:13.652873  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.656157  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.656603  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.656630  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.657028  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:13.657320  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.657528  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.657750  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:13.658030  160268 main.go:141] libmachine: Using SSH client type: native
	I0826 12:30:13.658262  160268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0826 12:30:13.658291  160268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:30:13.948478  160268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:30:13.948509  160268 machine.go:96] duration metric: took 1.012799036s to provisionDockerMachine
	I0826 12:30:13.948525  160268 start.go:293] postStartSetup for "newest-cni-114926" (driver="kvm2")
	I0826 12:30:13.948538  160268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:30:13.948567  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:13.948942  160268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:30:13.948977  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:13.951978  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.952376  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:13.952404  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:13.952619  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:13.952861  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:13.953033  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:13.953186  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:14.038319  160268 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:30:14.042651  160268 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:30:14.042682  160268 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:30:14.042755  160268 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:30:14.042876  160268 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:30:14.042984  160268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:30:14.053449  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:30:14.078648  160268 start.go:296] duration metric: took 130.103884ms for postStartSetup
	I0826 12:30:14.078708  160268 fix.go:56] duration metric: took 19.390960845s for fixHost
	I0826 12:30:14.078736  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:14.081862  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.082239  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:14.082269  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.082460  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:14.082702  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:14.082984  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:14.083186  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:14.083429  160268 main.go:141] libmachine: Using SSH client type: native
	I0826 12:30:14.083652  160268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0826 12:30:14.083671  160268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:30:14.195589  160268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724675414.153468144
	
	I0826 12:30:14.195613  160268 fix.go:216] guest clock: 1724675414.153468144
	I0826 12:30:14.195621  160268 fix.go:229] Guest: 2024-08-26 12:30:14.153468144 +0000 UTC Remote: 2024-08-26 12:30:14.078714372 +0000 UTC m=+19.545271729 (delta=74.753772ms)
	I0826 12:30:14.195641  160268 fix.go:200] guest clock delta is within tolerance: 74.753772ms
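fix.go compares the guest's "date +%s.%N" output against the host's wall clock and only resynchronizes when the delta exceeds a tolerance (74.753772ms is well inside it here). A small sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not minikube's configured threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1724675414.153468144" (seconds.nanoseconds from
// `date +%s.%N`) into a time.Time. Precision beyond roughly a microsecond is
// lost in the float conversion, which is fine for a tolerance check.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(s), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec, frac := math.Modf(f)
	return time.Unix(int64(sec), int64(frac*1e9)), nil
}

func main() {
	guest, _ := parseGuestClock("1724675414.153468144")
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, would resync\n", delta)
	}
}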
	I0826 12:30:14.195646  160268 start.go:83] releasing machines lock for "newest-cni-114926", held for 19.507913612s
	I0826 12:30:14.195664  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:14.195938  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetIP
	I0826 12:30:14.199041  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.199429  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:14.199456  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.199673  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:14.200164  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:14.200369  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:14.200461  160268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:30:14.200521  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:14.200555  160268 ssh_runner.go:195] Run: cat /version.json
	I0826 12:30:14.200585  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:14.203299  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.203701  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:14.203730  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.203753  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.203872  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:14.204076  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:14.204268  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:14.204292  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:14.204296  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:14.204469  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:14.204518  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:14.204675  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:14.204861  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:14.205023  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:14.292119  160268 ssh_runner.go:195] Run: systemctl --version
	I0826 12:30:14.328262  160268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:30:14.472025  160268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:30:14.478385  160268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:30:14.478475  160268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:30:14.497231  160268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:30:14.497265  160268 start.go:495] detecting cgroup driver to use...
	I0826 12:30:14.497346  160268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:30:14.514514  160268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:30:14.529500  160268 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:30:14.529579  160268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:30:14.544825  160268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:30:14.559635  160268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:30:14.680845  160268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:30:14.856439  160268 docker.go:233] disabling docker service ...
	I0826 12:30:14.856531  160268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:30:14.872083  160268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:30:14.886156  160268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:30:15.006176  160268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:30:15.126435  160268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:30:15.141816  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:30:15.161034  160268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:30:15.161108  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.171182  160268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:30:15.171272  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.181623  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.191849  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.201526  160268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:30:15.211428  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.220931  160268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.239245  160268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:30:15.252165  160268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:30:15.261115  160268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:30:15.261189  160268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:30:15.274625  160268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
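The three commands above form a probe-then-fallback: sysctl fails with status 255 because the br_netfilter module is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is enabled afterwards. A compact sketch of the same flow; the run helper is a hypothetical stand-in for minikube's ssh_runner, and this sketch executes locally rather than over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical stand-in for executing a command on the guest.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func ensureNetfilter() error {
	// Probe first; failure usually just means br_netfilter is not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return err
		}
	}
	// Enable IPv4 forwarding regardless of the probe result.
	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	fmt.Println(ensureNetfilter())
}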
	I0826 12:30:15.284003  160268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:30:15.401097  160268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:30:15.540737  160268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:30:15.540832  160268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:30:15.545554  160268 start.go:563] Will wait 60s for crictl version
	I0826 12:30:15.545625  160268 ssh_runner.go:195] Run: which crictl
	I0826 12:30:15.549066  160268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:30:15.588594  160268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:30:15.588692  160268 ssh_runner.go:195] Run: crio --version
	I0826 12:30:15.616829  160268 ssh_runner.go:195] Run: crio --version
	I0826 12:30:15.647564  160268 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:30:15.649203  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetIP
	I0826 12:30:15.652330  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:15.652614  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:15.652649  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:15.652907  160268 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:30:15.657131  160268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:30:15.672366  160268 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0826 12:30:15.673671  160268 kubeadm.go:883] updating cluster {Name:newest-cni-114926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-114926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:30:15.673838  160268 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:30:15.673902  160268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:30:15.717066  160268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:30:15.717152  160268 ssh_runner.go:195] Run: which lz4
	I0826 12:30:15.721055  160268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:30:15.725158  160268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:30:15.725198  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:30:17.052701  160268 crio.go:462] duration metric: took 1.331674639s to copy over tarball
	I0826 12:30:17.052786  160268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:30:19.283353  160268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.230531503s)
	I0826 12:30:19.283401  160268 crio.go:469] duration metric: took 2.230668229s to extract the tarball
	I0826 12:30:19.283413  160268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:30:19.324459  160268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:30:19.371840  160268 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:30:19.371871  160268 cache_images.go:84] Images are preloaded, skipping loading
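The preload path above is check-copy-extract: "crictl images --output json" shows the expected kube-apiserver image missing, the cached tar.lz4 is copied to the guest, unpacked into /var with tar -I lz4 (preserving xattrs so file capabilities survive), and the image check then passes. Below is a sketch of the extraction step only, with the tarball path and target directory taken as parameters; it is an illustration of the tar invocation in the log, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a lz4-compressed image tarball into targetDir,
// mirroring the tar invocation in the log above (security.capability xattrs
// are preserved so binaries keep their file capabilities).
func extractPreload(tarball, targetDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", targetDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %w: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}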
	I0826 12:30:19.371882  160268 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.0 crio true true} ...
	I0826 12:30:19.372044  160268 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-114926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-114926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:30:19.372148  160268 ssh_runner.go:195] Run: crio config
	I0826 12:30:19.426096  160268 cni.go:84] Creating CNI manager for ""
	I0826 12:30:19.426118  160268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:30:19.426128  160268 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0826 12:30:19.426152  160268 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-114926 NodeName:newest-cni-114926 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:30:19.426298  160268 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-114926"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:30:19.426377  160268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:30:19.437458  160268 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:30:19.437544  160268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:30:19.447841  160268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0826 12:30:19.465485  160268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:30:19.483021  160268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0826 12:30:19.500268  160268 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0826 12:30:19.503985  160268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
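The bash one-liner above keeps the /etc/hosts entry idempotent: any previous control-plane.minikube.internal line is filtered out before the current IP is appended, so repeated restarts never accumulate stale mappings. The same idea expressed directly in Go, as an illustration rather than the code minikube actually runs:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so that exactly one line maps name to ip,
// dropping any previous line for the same name first.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(setHostsEntry("/etc/hosts", "192.168.72.54", "control-plane.minikube.internal"))
}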
	I0826 12:30:19.517097  160268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:30:19.638990  160268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:30:19.659614  160268 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926 for IP: 192.168.72.54
	I0826 12:30:19.659649  160268 certs.go:194] generating shared ca certs ...
	I0826 12:30:19.659665  160268 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:30:19.659806  160268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:30:19.659847  160268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:30:19.659856  160268 certs.go:256] generating profile certs ...
	I0826 12:30:19.659935  160268 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/client.key
	I0826 12:30:19.660005  160268 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/apiserver.key.0b07f1bb
	I0826 12:30:19.660037  160268 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/proxy-client.key
	I0826 12:30:19.660163  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:30:19.660190  160268 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:30:19.660197  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:30:19.660239  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:30:19.660270  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:30:19.660304  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:30:19.660364  160268 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:30:19.661106  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:30:19.701128  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:30:19.736682  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:30:19.776536  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:30:19.819357  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:30:19.853732  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:30:19.881859  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:30:19.907991  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/newest-cni-114926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:30:19.933180  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:30:19.957260  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:30:19.983143  160268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:30:20.007253  160268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:30:20.024389  160268 ssh_runner.go:195] Run: openssl version
	I0826 12:30:20.030181  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:30:20.042070  160268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:30:20.047142  160268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:30:20.047212  160268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:30:20.054123  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:30:20.067163  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:30:20.080220  160268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:30:20.084749  160268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:30:20.084819  160268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:30:20.090600  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:30:20.101772  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:30:20.113362  160268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:30:20.118088  160268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:30:20.118152  160268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:30:20.123846  160268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
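	(Context for the hash/symlink commands above: OpenSSL trust stores look up CA certificates by subject-hash filenames, so each PEM is exposed in /etc/ssl/certs as "<hash>.0". A minimal Go sketch of that step follows; the paths are illustrative, not copied from this run.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCACert mirrors the pattern in the log: compute the OpenSSL subject hash
	// of a CA certificate and expose it in /etc/ssl/certs as "<hash>.0", the
	// filename scheme OpenSSL uses to locate trusted CAs.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// os.Symlink fails if the link already exists; remove any stale link first
		// to emulate the forced `ln -fs` seen in the log.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Illustrative path only; the log links certs under /usr/share/ca-certificates.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}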
	I0826 12:30:20.135208  160268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:30:20.141340  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:30:20.147895  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:30:20.154911  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:30:20.162146  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:30:20.168555  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:30:20.175003  160268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
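	(The `-checkend 86400` runs above test whether each control-plane certificate remains valid for at least 24 hours. A rough Go equivalent using crypto/x509 is sketched below; the file path is illustrative.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in a PEM file is still valid
	// for at least the given duration, roughly what `openssl x509 -checkend` tests.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}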
	I0826 12:30:20.181997  160268 kubeadm.go:392] StartCluster: {Name:newest-cni-114926 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-114926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:30:20.182094  160268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:30:20.182149  160268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:30:20.232093  160268 cri.go:89] found id: ""
	I0826 12:30:20.232176  160268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:30:20.243622  160268 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:30:20.243650  160268 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:30:20.243708  160268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:30:20.253967  160268 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:30:20.255513  160268 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-114926" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:30:20.256443  160268 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-114926" cluster setting kubeconfig missing "newest-cni-114926" context setting]
	I0826 12:30:20.257728  160268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:30:20.259689  160268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:30:20.272060  160268 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0826 12:30:20.272102  160268 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:30:20.272134  160268 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:30:20.272189  160268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:30:20.308136  160268 cri.go:89] found id: ""
	I0826 12:30:20.308228  160268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:30:20.327907  160268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:30:20.339163  160268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:30:20.339193  160268 kubeadm.go:157] found existing configuration files:
	
	I0826 12:30:20.339284  160268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:30:20.350352  160268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:30:20.350411  160268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:30:20.361449  160268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:30:20.372223  160268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:30:20.372307  160268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:30:20.383849  160268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:30:20.394591  160268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:30:20.394665  160268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:30:20.406118  160268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:30:20.416850  160268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:30:20.416923  160268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:30:20.427322  160268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:30:20.437446  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:30:20.567242  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:30:21.484651  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:30:21.703678  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:30:21.785790  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
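	(The restart path above re-runs individual `kubeadm init` phases -- certs, kubeconfig, kubelet-start, control-plane, etcd -- against the generated /var/tmp/minikube/kubeadm.yaml instead of performing a full `kubeadm init`. A compact sketch of that sequence, with the binary and config paths copied from the log and error handling simplified:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const kubeadmCmd = `sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`

		// Phase order as it appears in the log for a control-plane restart.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf(kubeadmCmd, p))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}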
	I0826 12:30:21.887466  160268 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:30:21.887588  160268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:30:22.388049  160268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:30:22.888091  160268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:30:22.932295  160268 api_server.go:72] duration metric: took 1.044839641s to wait for apiserver process to appear ...
	I0826 12:30:22.932337  160268 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:30:22.932366  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:22.933029  160268 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I0826 12:30:23.432592  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:25.651821  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:30:25.651861  160268 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:30:25.651880  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:25.662385  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:30:25.662428  160268 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:30:25.932852  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:25.938198  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:30:25.938234  160268 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:30:26.432771  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:26.437687  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:30:26.437723  160268 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:30:26.933338  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:26.937669  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I0826 12:30:26.944069  160268 api_server.go:141] control plane version: v1.31.0
	I0826 12:30:26.944102  160268 api_server.go:131] duration metric: took 4.011755338s to wait for apiserver health ...
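	(The readiness wait above polls /healthz roughly every 500ms, treating the early 403 "anonymous user" and 500 "post-start hooks still running" responses as not-yet-ready until a 200 comes back. A hedged sketch of such a loop; the insecure TLS client and timeout value are assumptions made for illustration.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline passes. Non-200 responses are treated as "not ready yet",
	// matching the behaviour visible in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver serves a self-signed certificate during bring-up, so
			// verification is skipped here purely for the health probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.72.54:8443/healthz", 4*time.Minute))
	}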
	I0826 12:30:26.944112  160268 cni.go:84] Creating CNI manager for ""
	I0826 12:30:26.944118  160268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:30:26.946301  160268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:30:26.947975  160268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:30:26.958224  160268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
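	(For reference, the 1-k8s.conflist written above configures a simple bridge CNI. The exact 496-byte file from this run is not reproduced in the log; the literal below is only an illustrative conflist of that general shape, kept as a Go constant to stay in one language. The subnet mirrors the pod-network-cidr extra option (10.42.0.0/16) from the cluster config.)

	package main

	import "fmt"

	// A generic bridge CNI conflist of the kind minikube drops into /etc/cni/net.d.
	// Illustrative shape only; not the actual file generated by this run.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.42.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() { fmt.Println(bridgeConflist) }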
	I0826 12:30:27.007418  160268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:30:27.022323  160268 system_pods.go:59] 8 kube-system pods found
	I0826 12:30:27.022363  160268 system_pods.go:61] "coredns-6f6b679f8f-w5h4x" [5cac337f-cb90-4fe7-9927-ffc34d8c4126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:30:27.022371  160268 system_pods.go:61] "etcd-newest-cni-114926" [2bc02488-e47e-4657-b726-3f14634521fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:30:27.022379  160268 system_pods.go:61] "kube-apiserver-newest-cni-114926" [026ce8de-0d72-4430-b4cb-38e12c7758b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:30:27.022385  160268 system_pods.go:61] "kube-controller-manager-newest-cni-114926" [dc817012-d293-4912-8f71-760449ed286c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:30:27.022391  160268 system_pods.go:61] "kube-proxy-fmt9g" [73b909cd-c911-4fbf-a6a8-344244040fc6] Running
	I0826 12:30:27.022396  160268 system_pods.go:61] "kube-scheduler-newest-cni-114926" [c7751ebd-32ee-40a7-b982-3e2379b27c3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:30:27.022401  160268 system_pods.go:61] "metrics-server-6867b74b74-576k8" [87dc59e2-38e4-4de2-8469-3358a1398a15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:30:27.022406  160268 system_pods.go:61] "storage-provisioner" [22dd23b5-1731-4046-a222-9a39e031bf44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:30:27.022415  160268 system_pods.go:74] duration metric: took 14.973608ms to wait for pod list to return data ...
	I0826 12:30:27.022424  160268 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:30:27.026385  160268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:30:27.026428  160268 node_conditions.go:123] node cpu capacity is 2
	I0826 12:30:27.026450  160268 node_conditions.go:105] duration metric: took 4.021263ms to run NodePressure ...
	I0826 12:30:27.026476  160268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:30:27.318121  160268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:30:27.330011  160268 ops.go:34] apiserver oom_adj: -16
	I0826 12:30:27.330044  160268 kubeadm.go:597] duration metric: took 7.086385821s to restartPrimaryControlPlane
	I0826 12:30:27.330058  160268 kubeadm.go:394] duration metric: took 7.148074015s to StartCluster
	I0826 12:30:27.330081  160268 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:30:27.330202  160268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:30:27.332086  160268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:30:27.332371  160268 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:30:27.332456  160268 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:30:27.332565  160268 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-114926"
	I0826 12:30:27.332597  160268 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-114926"
	W0826 12:30:27.332609  160268 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:30:27.332608  160268 config.go:182] Loaded profile config "newest-cni-114926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:30:27.332591  160268 addons.go:69] Setting default-storageclass=true in profile "newest-cni-114926"
	I0826 12:30:27.332629  160268 addons.go:69] Setting metrics-server=true in profile "newest-cni-114926"
	I0826 12:30:27.332646  160268 host.go:66] Checking if "newest-cni-114926" exists ...
	I0826 12:30:27.332673  160268 addons.go:234] Setting addon metrics-server=true in "newest-cni-114926"
	W0826 12:30:27.332687  160268 addons.go:243] addon metrics-server should already be in state true
	I0826 12:30:27.332688  160268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-114926"
	I0826 12:30:27.332723  160268 host.go:66] Checking if "newest-cni-114926" exists ...
	I0826 12:30:27.332620  160268 addons.go:69] Setting dashboard=true in profile "newest-cni-114926"
	I0826 12:30:27.332767  160268 addons.go:234] Setting addon dashboard=true in "newest-cni-114926"
	W0826 12:30:27.332781  160268 addons.go:243] addon dashboard should already be in state true
	I0826 12:30:27.332810  160268 host.go:66] Checking if "newest-cni-114926" exists ...
	I0826 12:30:27.333044  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.333057  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.333090  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.333095  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.333099  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.333119  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.333161  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.333190  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.335273  160268 out.go:177] * Verifying Kubernetes components...
	I0826 12:30:27.336735  160268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:30:27.351686  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0826 12:30:27.352232  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.352825  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.352849  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.353189  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0826 12:30:27.353400  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.353783  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.354294  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.354342  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.354539  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0826 12:30:27.354812  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.354854  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.355030  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.355256  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.355454  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I0826 12:30:27.355564  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.355579  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.355806  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.355850  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.355937  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.355983  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.356371  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.356389  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.356509  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.356566  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.356711  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.356844  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:30:27.360821  160268 addons.go:234] Setting addon default-storageclass=true in "newest-cni-114926"
	W0826 12:30:27.360846  160268 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:30:27.360878  160268 host.go:66] Checking if "newest-cni-114926" exists ...
	I0826 12:30:27.361238  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.361284  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.376357  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0826 12:30:27.376396  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0826 12:30:27.376624  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0826 12:30:27.376912  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.376956  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.377152  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.377406  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.377413  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.377426  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.377430  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.377594  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.377605  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.377742  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.377779  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.377871  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:30:27.377926  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.377955  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:30:27.378062  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:30:27.380173  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:27.380635  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:27.380952  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:27.382230  160268 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0826 12:30:27.382241  160268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:30:27.383084  160268 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:30:27.383940  160268 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:30:27.383960  160268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:30:27.383983  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:27.384553  160268 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0826 12:30:27.384628  160268 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:30:27.384644  160268 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:30:27.384673  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:27.385827  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0826 12:30:27.385845  160268 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0826 12:30:27.385869  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:27.388796  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.389247  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.389640  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:27.389688  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:27.389709  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.389726  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.390043  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:27.390055  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:27.390242  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:27.390279  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:27.390455  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:27.390487  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:27.390622  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:27.390622  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:27.390913  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.391158  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:27.391175  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.391485  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:27.391683  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:27.391849  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:27.391997  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:27.393719  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0826 12:30:27.394078  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.394551  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.394565  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.394885  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.395448  160268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:30:27.395488  160268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:30:27.415864  160268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0826 12:30:27.416417  160268 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:30:27.417230  160268 main.go:141] libmachine: Using API Version  1
	I0826 12:30:27.417265  160268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:30:27.417647  160268 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:30:27.417870  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetState
	I0826 12:30:27.419989  160268 main.go:141] libmachine: (newest-cni-114926) Calling .DriverName
	I0826 12:30:27.420316  160268 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:30:27.420338  160268 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:30:27.420365  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHHostname
	I0826 12:30:27.423749  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.424129  160268 main.go:141] libmachine: (newest-cni-114926) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:45:c8", ip: ""} in network mk-newest-cni-114926: {Iface:virbr4 ExpiryTime:2024-08-26 13:30:05 +0000 UTC Type:0 Mac:52:54:00:6b:45:c8 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:newest-cni-114926 Clientid:01:52:54:00:6b:45:c8}
	I0826 12:30:27.424182  160268 main.go:141] libmachine: (newest-cni-114926) DBG | domain newest-cni-114926 has defined IP address 192.168.72.54 and MAC address 52:54:00:6b:45:c8 in network mk-newest-cni-114926
	I0826 12:30:27.424368  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHPort
	I0826 12:30:27.424557  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHKeyPath
	I0826 12:30:27.424736  160268 main.go:141] libmachine: (newest-cni-114926) Calling .GetSSHUsername
	I0826 12:30:27.424907  160268 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/newest-cni-114926/id_rsa Username:docker}
	I0826 12:30:27.522777  160268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:30:27.541115  160268 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:30:27.541230  160268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:30:27.554972  160268 api_server.go:72] duration metric: took 222.560494ms to wait for apiserver process to appear ...
	I0826 12:30:27.555001  160268 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:30:27.555019  160268 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I0826 12:30:27.559489  160268 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I0826 12:30:27.560710  160268 api_server.go:141] control plane version: v1.31.0
	I0826 12:30:27.560739  160268 api_server.go:131] duration metric: took 5.73057ms to wait for apiserver health ...
	I0826 12:30:27.560751  160268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:30:27.567290  160268 system_pods.go:59] 8 kube-system pods found
	I0826 12:30:27.567338  160268 system_pods.go:61] "coredns-6f6b679f8f-w5h4x" [5cac337f-cb90-4fe7-9927-ffc34d8c4126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:30:27.567348  160268 system_pods.go:61] "etcd-newest-cni-114926" [2bc02488-e47e-4657-b726-3f14634521fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:30:27.567361  160268 system_pods.go:61] "kube-apiserver-newest-cni-114926" [026ce8de-0d72-4430-b4cb-38e12c7758b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:30:27.567370  160268 system_pods.go:61] "kube-controller-manager-newest-cni-114926" [dc817012-d293-4912-8f71-760449ed286c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:30:27.567380  160268 system_pods.go:61] "kube-proxy-fmt9g" [73b909cd-c911-4fbf-a6a8-344244040fc6] Running
	I0826 12:30:27.567389  160268 system_pods.go:61] "kube-scheduler-newest-cni-114926" [c7751ebd-32ee-40a7-b982-3e2379b27c3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:30:27.567404  160268 system_pods.go:61] "metrics-server-6867b74b74-576k8" [87dc59e2-38e4-4de2-8469-3358a1398a15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:30:27.567412  160268 system_pods.go:61] "storage-provisioner" [22dd23b5-1731-4046-a222-9a39e031bf44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:30:27.567424  160268 system_pods.go:74] duration metric: took 6.666154ms to wait for pod list to return data ...
	I0826 12:30:27.567439  160268 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:30:27.572114  160268 default_sa.go:45] found service account: "default"
	I0826 12:30:27.572141  160268 default_sa.go:55] duration metric: took 4.695334ms for default service account to be created ...
	I0826 12:30:27.572153  160268 kubeadm.go:582] duration metric: took 239.750651ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0826 12:30:27.572167  160268 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:30:27.575734  160268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:30:27.575764  160268 node_conditions.go:123] node cpu capacity is 2
	I0826 12:30:27.575778  160268 node_conditions.go:105] duration metric: took 3.605735ms to run NodePressure ...
	I0826 12:30:27.575793  160268 start.go:241] waiting for startup goroutines ...
	I0826 12:30:27.636633  160268 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:30:27.636657  160268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:30:27.645401  160268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:30:27.670095  160268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:30:27.670420  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0826 12:30:27.670446  160268 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0826 12:30:27.718129  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0826 12:30:27.718158  160268 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0826 12:30:27.718798  160268 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:30:27.718816  160268 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:30:27.783360  160268 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:30:27.783399  160268 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:30:27.819279  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0826 12:30:27.819315  160268 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0826 12:30:27.871367  160268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:30:27.908595  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0826 12:30:27.908620  160268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0826 12:30:28.039723  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0826 12:30:28.039753  160268 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0826 12:30:28.145771  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0826 12:30:28.145803  160268 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0826 12:30:28.229538  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0826 12:30:28.229568  160268 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0826 12:30:28.285974  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0826 12:30:28.286018  160268 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0826 12:30:28.318327  160268 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0826 12:30:28.318359  160268 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0826 12:30:28.338628  160268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
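	(As the commands above show, addon manifests are first staged under /etc/kubernetes/addons on the node and then applied in one batch with the version-pinned kubectl binary, using the in-VM kubeconfig rather than the host's. A minimal sketch of assembling such an apply command; the manifest list here is illustrative.)

	package main

	import (
		"fmt"
		"strings"
	)

	// buildApplyCmd reproduces the shape of the addon apply commands in the log:
	// sudo KUBECONFIG=/var/lib/minikube/kubeconfig <pinned kubectl> apply -f m1 -f m2 ...
	func buildApplyCmd(kubectlVersion string, manifests []string) string {
		var b strings.Builder
		b.WriteString("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ")
		b.WriteString("/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl apply")
		for _, m := range manifests {
			b.WriteString(" -f " + m)
		}
		return b.String()
	}

	func main() {
		fmt.Println(buildApplyCmd("v1.31.0", []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		}))
	}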
	I0826 12:30:30.036812  160268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.391352498s)
	I0826 12:30:30.036819  160268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.36668421s)
	I0826 12:30:30.036923  160268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.165527793s)
	I0826 12:30:30.037016  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.037042  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.036920  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.037131  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.036963  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.037174  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.037489  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.037507  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.037518  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.037525  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.039246  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.039253  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Closing plugin on server side
	I0826 12:30:30.039265  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Closing plugin on server side
	I0826 12:30:30.039264  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Closing plugin on server side
	I0826 12:30:30.039283  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.039289  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.039290  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.039301  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.039301  160268 addons.go:475] Verifying addon metrics-server=true in "newest-cni-114926"
	I0826 12:30:30.039309  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.039312  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.039372  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.039380  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.039388  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.039633  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.039647  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.039752  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.039768  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.046392  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.046428  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.046759  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.046780  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.046794  160268 main.go:141] libmachine: (newest-cni-114926) DBG | Closing plugin on server side
	I0826 12:30:30.571454  160268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.23276219s)
	I0826 12:30:30.571532  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.571551  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.571900  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.571923  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.571934  160268 main.go:141] libmachine: Making call to close driver server
	I0826 12:30:30.571950  160268 main.go:141] libmachine: (newest-cni-114926) Calling .Close
	I0826 12:30:30.572212  160268 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:30:30.572240  160268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:30:30.574110  160268 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-114926 addons enable metrics-server
	
	I0826 12:30:30.575832  160268 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0826 12:30:30.577408  160268 addons.go:510] duration metric: took 3.244966784s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0826 12:30:30.577480  160268 start.go:246] waiting for cluster config update ...
	I0826 12:30:30.577499  160268 start.go:255] writing updated cluster config ...
	I0826 12:30:30.577772  160268 ssh_runner.go:195] Run: rm -f paused
	I0826 12:30:30.629989  160268 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:30:30.632028  160268 out.go:177] * Done! kubectl is now configured to use "newest-cni-114926" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.071632648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f319d65f-0951-412e-a79c-b71736921bf0 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.073612453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c25b707b-29c8-4ce1-bfaa-ba6cb19e304f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.074363254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675436074329605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c25b707b-29c8-4ce1-bfaa-ba6cb19e304f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.075593039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3693dd10-fd6a-446c-913d-fef8460528ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.075680031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3693dd10-fd6a-446c-913d-fef8460528ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.075985457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3693dd10-fd6a-446c-913d-fef8460528ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.082621171Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=344ecd18-576e-44af-b996-647cf6bcf98a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.082937894Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b0640b7f-39d3-4fb1-b78c-2f1f970646ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674550578786183,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-26T12:15:50.270654963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2153a5d506441c2dfe3a5fddd5f845ad1c74c19c88b1ac83a30ef59ad33eda5,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gmfbr,Uid:558889e1-e85a-45ef-9636-892204c4cf48,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674550166397984,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gmfbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558889e1-e85a-45ef-9636-892204c4cf48
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.853184685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8489w,Uid:2bcfb870-46aa-4ec1-b958-707896e53120,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549594124622,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcfb870-46aa-4ec1-b958-707896e53120,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.285192485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-wnd26,Uid:94b517df-9201-4602-
a58f-77617a38d641,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549558540051,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.251563909Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&PodSandboxMetadata{Name:kube-proxy-gwj5w,Uid:18bfe796-2c64-420d-a01d-ea68c56573c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674549335905084,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-26T12:15:49.014214683Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-956479,Uid:c858f6a584517160d9207cc49df9c77b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724674538620889711,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8443,kubernetes.io/config.hash: c858f6a584517160d9207cc49df9c77b,kubernetes.io/config.seen: 2024-08-26T12:15:38.168373687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8772aac82dc9becc39dd4c3f23175ca
78021164a7a97391fdbe4d18fc6074a90,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-956479,Uid:6880a49a44beb7e7c7e14fe0baab6d74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538616100343,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.213:2379,kubernetes.io/config.hash: 6880a49a44beb7e7c7e14fe0baab6d74,kubernetes.io/config.seen: 2024-08-26T12:15:38.168372521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-956479,Uid:87fed30611b82eae5e5fa8ea1240838d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538603318874,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87fed30611b82eae5e5fa8ea1240838d,kubernetes.io/config.seen: 2024-08-26T12:15:38.168365679Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-956479,Uid:e28c62c00ab6b72465e92210eaf48849,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724674538598779652,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: e28c62c00ab6b72465e92210eaf48849,kubernetes.io/config.seen: 2024-08-26T12:15:38.168370232Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=344ecd18-576e-44af-b996-647cf6bcf98a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.083611436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=840b05fc-e8c3-421c-a7c0-d39f6d1049f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.083691819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=840b05fc-e8c3-421c-a7c0-d39f6d1049f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.084643192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=840b05fc-e8c3-421c-a7c0-d39f6d1049f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.115500854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4009293-f6c7-4182-9390-47eb88cea530 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.115579977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4009293-f6c7-4182-9390-47eb88cea530 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.116593211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cb6e449-57e2-4176-9361-79a6b5b1aaae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.117068785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675436117043759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cb6e449-57e2-4176-9361-79a6b5b1aaae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.117505169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a861047-9daa-4e54-9473-90d565dbcba2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.117579744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a861047-9daa-4e54-9473-90d565dbcba2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.117965043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a861047-9daa-4e54-9473-90d565dbcba2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.152167880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7af6d699-40b9-447c-a91b-0b18fdd71005 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.152240524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7af6d699-40b9-447c-a91b-0b18fdd71005 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.153312282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c955e94-303a-4571-9dae-3ae16b89b582 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.153668436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675436153645200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c955e94-303a-4571-9dae-3ae16b89b582 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.154226654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=127f16c5-6fd8-4dcb-9708-078e51c42376 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.154288452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=127f16c5-6fd8-4dcb-9708-078e51c42376 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:30:36 no-preload-956479 crio[728]: time="2024-08-26 12:30:36.154485008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4,PodSandboxId:1e650c98bccdbd7382de1acb7bd00441c4f0b00ea02735e6ca782f3d122528b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724674550706573722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0640b7f-39d3-4fb1-b78c-2f1f970646ae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202,PodSandboxId:2997d8433b41643ea0759e1098a1835c30ded95eea26ef528dc0124d91f7d50c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550245085106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wnd26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b517df-9201-4602-a58f-77617a38d641,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0,PodSandboxId:0e1006f2fd77ef8dda4cd5010d346687a17aea69444873336578a4a7f961b417,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724674550068524449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8489w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
cfb870-46aa-4ec1-b958-707896e53120,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b,PodSandboxId:f6061a901467f39c72965bea5ebb803bc3f9f7568dbabc84553d94b87b3da9fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724674549594611318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwj5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18bfe796-2c64-420d-a01d-ea68c56573c7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981,PodSandboxId:29558bf11d3b516ec586e2cb66c1854ac9d28c98bd1033dcbe712b7c7921d288,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724674538867793972,Labels:map
[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e28c62c00ab6b72465e92210eaf48849,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340,PodSandboxId:e2502d0452fa20bc35766e598662efe03af8bbb80846f9ea9e7740e97175d251,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724674538873045048,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4,PodSandboxId:8772aac82dc9becc39dd4c3f23175ca78021164a7a97391fdbe4d18fc6074a90,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724674538811016776,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6880a49a44beb7e7c7e14fe0baab6d74,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb,PodSandboxId:86c389187db4d9a19ab84598b5e74c03a3d1aa19e7070ede75fb123cb3bd057b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724674538772351568,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fed30611b82eae5e5fa8ea1240838d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394,PodSandboxId:826b836ede432ffeb4cecf8cfff45582044a10ba5146b3574790c5273cedba0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724674251468082178,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-956479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c858f6a584517160d9207cc49df9c77b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=127f16c5-6fd8-4dcb-9708-078e51c42376 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cb06f1e6077d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   1e650c98bccdb       storage-provisioner
	4c5433ef50979       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   2997d8433b416       coredns-6f6b679f8f-wnd26
	d7f33f4691468       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   0e1006f2fd77e       coredns-6f6b679f8f-8489w
	2f7d6667cb757       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   f6061a901467f       kube-proxy-gwj5w
	42327b5ac7970       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   e2502d0452fa2       kube-apiserver-no-preload-956479
	2f6478fc5d177       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   29558bf11d3b5       kube-scheduler-no-preload-956479
	a1149aeff78c8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   8772aac82dc9b       etcd-no-preload-956479
	6e0bad9bca873       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   86c389187db4d       kube-controller-manager-no-preload-956479
	2aae61c21df49       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   826b836ede432       kube-apiserver-no-preload-956479
	
	
	==> coredns [4c5433ef5097956d5c3d41db078692f12961e502b9943a9294b0e521e146d202] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d7f33f469146892940bf72bdc4c2a96b4b381c0b87009d2fac5a384b57002fa0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-956479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-956479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff
	                    minikube.k8s.io/name=no-preload-956479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Aug 2024 12:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Aug 2024 12:30:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Aug 2024 12:26:06 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Aug 2024 12:26:06 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Aug 2024 12:26:06 +0000   Mon, 26 Aug 2024 12:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Aug 2024 12:26:06 +0000   Mon, 26 Aug 2024 12:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    no-preload-956479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f9d2e930ac64583a71d5c8ed83b972c
	  System UUID:                0f9d2e93-0ac6-4583-a71d-5c8ed83b972c
	  Boot ID:                    ec17325c-254e-4dd8-a77b-56f28d12a1f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-8489w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-wnd26                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-956479                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-956479             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-956479    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gwj5w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-956479             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-gmfbr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-956479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-956479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-956479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-956479 event: Registered Node no-preload-956479 in Controller
	
	
	==> dmesg <==
	[  +0.052280] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.067980] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.974609] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.443330] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.392902] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.074602] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069160] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.202677] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.121597] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.297557] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[ +15.783261] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.063537] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.605503] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +3.557868] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.859554] kauditd_printk_skb: 91 callbacks suppressed
	[Aug26 12:15] systemd-fstab-generator[3073]: Ignoring "noauto" option for root device
	[  +0.067447] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.999881] systemd-fstab-generator[3397]: Ignoring "noauto" option for root device
	[  +0.083516] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.787589] systemd-fstab-generator[3522]: Ignoring "noauto" option for root device
	[  +0.842382] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.391489] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [a1149aeff78c84f53b4ec3a5a47e94e5e983994802445e19f7e0649cb4cb81e4] <==
	{"level":"info","ts":"2024-08-26T12:15:39.795040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgPreVoteResp from afd31c34526e5864 at term 1"}
	{"level":"info","ts":"2024-08-26T12:15:39.795106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became candidate at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgVoteResp from afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became leader at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.795147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: afd31c34526e5864 elected leader afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-08-26T12:15:39.799976Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.804046Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"afd31c34526e5864","local-member-attributes":"{Name:no-preload-956479 ClientURLs:[https://192.168.50.213:2379]}","request-path":"/0/members/afd31c34526e5864/attributes","cluster-id":"64fdbb8e23141dc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-26T12:15:39.804096Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:39.805057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-26T12:15:39.815007Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:39.818722Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.818869Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.818896Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-26T12:15:39.820344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-08-26T12:15:39.816840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:39.823094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-26T12:15:39.818433Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-26T12:15:39.825158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-26T12:25:39.929779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-08-26T12:25:39.942085Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":687,"took":"11.59513ms","hash":317937888,"current-db-size-bytes":2347008,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2347008,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-26T12:25:39.942238Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":317937888,"revision":687,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-26T12:29:28.518376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.334506ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6369375814714052896 > lease_revoke:<id:5864918e9cfed0c5>","response":"size:28"}
	{"level":"warn","ts":"2024-08-26T12:30:21.894313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.657374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-26T12:30:21.894855Z","caller":"traceutil/trace.go:171","msg":"trace[701857011] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1159; }","duration":"127.285767ms","start":"2024-08-26T12:30:21.767533Z","end":"2024-08-26T12:30:21.894819Z","steps":["trace[701857011] 'range keys from in-memory index tree'  (duration: 126.63815ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-26T12:30:22.501383Z","caller":"traceutil/trace.go:171","msg":"trace[1227826346] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"180.827576ms","start":"2024-08-26T12:30:22.320508Z","end":"2024-08-26T12:30:22.501336Z","steps":["trace[1227826346] 'process raft request'  (duration: 180.401207ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:30:36 up 20 min,  0 users,  load average: 0.18, 0.15, 0.16
	Linux no-preload-956479 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2aae61c21df49288f1140ba91704ed1c7a467319d2c2ec914d47a10430594394] <==
	W0826 12:15:31.560591       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.574395       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.594996       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.635171       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.650888       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.669709       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.713234       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.719933       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.722519       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.724066       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.926024       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:31.947165       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.075241       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.087035       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.269054       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.283013       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.424180       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.622810       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:32.927198       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:34.724018       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.086082       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.089865       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.183644       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.221791       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0826 12:15:36.306035       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [42327b5ac7970322721bfb3c7a8024f8c4a858b6feac362120d72b5148868340] <==
	W0826 12:25:42.378615       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:25:42.378702       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:25:42.379845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:25:42.379887       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:26:42.381100       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:26:42.381370       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0826 12:26:42.381183       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:26:42.381487       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0826 12:26:42.382670       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:26:42.382804       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0826 12:28:42.383281       1 handler_proxy.go:99] no RequestInfo found in the context
	W0826 12:28:42.383282       1 handler_proxy.go:99] no RequestInfo found in the context
	E0826 12:28:42.384170       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0826 12:28:42.384184       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0826 12:28:42.385324       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0826 12:28:42.385343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6e0bad9bca8735c306210126a4d10fb566201611564be7696222f18ed769edeb] <==
	E0826 12:25:18.492540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:25:18.974195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:25:48.500051       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:25:48.983247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:26:06.862267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-956479"
	E0826 12:26:18.506124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:26:18.991855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:26:37.106180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="157.963µs"
	E0826 12:26:48.514225       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:26:49.000026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0826 12:26:51.101128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.873µs"
	E0826 12:27:18.520317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:27:19.008395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:27:48.527389       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:27:49.018257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:28:18.534234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:19.027714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:28:48.540697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:28:49.037495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:18.548325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:19.047003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:29:48.555328       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:29:49.056098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0826 12:30:18.563488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0826 12:30:19.065113       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2f7d6667cb757875ca1d9a31691f9215ae0d9a4aee5e5ccf20d302881d3afb0b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0826 12:15:50.569922       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0826 12:15:50.607524       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.213"]
	E0826 12:15:50.607624       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0826 12:15:50.659546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0826 12:15:50.659669       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0826 12:15:50.659715       1 server_linux.go:169] "Using iptables Proxier"
	I0826 12:15:50.669431       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0826 12:15:50.669829       1 server.go:483] "Version info" version="v1.31.0"
	I0826 12:15:50.669864       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0826 12:15:50.671362       1 config.go:197] "Starting service config controller"
	I0826 12:15:50.671514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0826 12:15:50.671472       1 config.go:104] "Starting endpoint slice config controller"
	I0826 12:15:50.671623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0826 12:15:50.672208       1 config.go:326] "Starting node config controller"
	I0826 12:15:50.672548       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0826 12:15:50.772760       1 shared_informer.go:320] Caches are synced for service config
	I0826 12:15:50.772826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0826 12:15:50.773131       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2f6478fc5d177533c71e78862a8b70569bc5a1542e92f61afd6476aa7e865981] <==
	W0826 12:15:41.462239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:41.462266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:41.462328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0826 12:15:41.462354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:41.462399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:41.462425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.265636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:42.265691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.326948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0826 12:15:42.327001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.327134       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0826 12:15:42.327200       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0826 12:15:42.331351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0826 12:15:42.331405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.393581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0826 12:15:42.393684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.494606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0826 12:15:42.494657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.616060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0826 12:15:42.616204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.753247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0826 12:15:42.753394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0826 12:15:42.807202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0826 12:15:42.807690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0826 12:15:44.147362       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 26 12:29:29 no-preload-956479 kubelet[3404]: E0826 12:29:29.083033    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:29:34 no-preload-956479 kubelet[3404]: E0826 12:29:34.352899    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675374352076498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:34 no-preload-956479 kubelet[3404]: E0826 12:29:34.353312    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675374352076498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:41 no-preload-956479 kubelet[3404]: E0826 12:29:41.083128    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]: E0826 12:29:44.151555    3404 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]: E0826 12:29:44.356461    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675384355831695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:44 no-preload-956479 kubelet[3404]: E0826 12:29:44.356531    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675384355831695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:54 no-preload-956479 kubelet[3404]: E0826 12:29:54.086196    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:29:54 no-preload-956479 kubelet[3404]: E0826 12:29:54.359232    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675394358702596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:29:54 no-preload-956479 kubelet[3404]: E0826 12:29:54.359330    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675394358702596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:04 no-preload-956479 kubelet[3404]: E0826 12:30:04.361608    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675404361148865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:04 no-preload-956479 kubelet[3404]: E0826 12:30:04.362490    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675404361148865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:06 no-preload-956479 kubelet[3404]: E0826 12:30:06.085042    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:30:14 no-preload-956479 kubelet[3404]: E0826 12:30:14.364790    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675414364135057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:14 no-preload-956479 kubelet[3404]: E0826 12:30:14.364873    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675414364135057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:20 no-preload-956479 kubelet[3404]: E0826 12:30:20.082598    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:30:24 no-preload-956479 kubelet[3404]: E0826 12:30:24.366965    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675424366407956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:24 no-preload-956479 kubelet[3404]: E0826 12:30:24.367022    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675424366407956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:34 no-preload-956479 kubelet[3404]: E0826 12:30:34.086500    3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gmfbr" podUID="558889e1-e85a-45ef-9636-892204c4cf48"
	Aug 26 12:30:34 no-preload-956479 kubelet[3404]: E0826 12:30:34.370795    3404 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675434369921836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 26 12:30:34 no-preload-956479 kubelet[3404]: E0826 12:30:34.370863    3404 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675434369921836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1cb06f1e6077d9cf9634078bf9a668387d1f8fe587adbdbbb1e804bf713c06b4] <==
	I0826 12:15:50.791670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0826 12:15:50.813484       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0826 12:15:50.813673       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0826 12:15:50.823654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0826 12:15:50.823899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e!
	I0826 12:15:50.825150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47b77d2e-e671-41ab-a057-7c43e509713c", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e became leader
	I0826 12:15:50.924050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956479_e8265ada-0674-4eb5-8dde-f2566602131e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-956479 -n no-preload-956479
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-956479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gmfbr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr: exit status 1 (73.519043ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gmfbr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-956479 describe pod metrics-server-6867b74b74-gmfbr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
E0826 12:27:20.477437  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.136:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (248.776042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-839656" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-839656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-839656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.788µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-839656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
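For reference, the two checks behind this failure can be reproduced by hand. The following is a minimal sketch (not part of the captured test output), using only standard kubectl flags and the profile, namespace, label selector, and deployment name that appear in the log above; it assumes the cluster's apiserver is reachable, which it was not during this run:

	# Wait for the dashboard pod to become Ready (the same poll the helper performs for up to 9m)
	kubectl --context old-k8s-version-839656 -n kubernetes-dashboard wait pod \
	  --selector=k8s-app=kubernetes-dashboard --for=condition=ready --timeout=9m

	# Show the image the dashboard-metrics-scraper deployment references, i.e. the value the
	# test expects to contain registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-839656 -n kubernetes-dashboard get deploy \
	  dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'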
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (238.484387ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-839656 logs -n 25: (1.645713712s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-117510                           | kubernetes-upgrade-117510    | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-585941                                        | pause-585941                 | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:00 UTC |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:00 UTC | 26 Aug 24 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:01 UTC | 26 Aug 24 12:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-956479             | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-923586            | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC | 26 Aug 24 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-156240                              | cert-expiration-156240       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148783 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:03 UTC |
	|         | disable-driver-mounts-148783                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC | 26 Aug 24 12:04 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-839656        | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697869  | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-956479                  | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-923586                 | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-956479                                   | no-preload-956479            | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-923586                                  | embed-certs-923586           | jenkins | v1.33.1 | 26 Aug 24 12:04 UTC | 26 Aug 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-839656             | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC | 26 Aug 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-839656                              | old-k8s-version-839656       | jenkins | v1.33.1 | 26 Aug 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697869       | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697869 | jenkins | v1.33.1 | 26 Aug 24 12:06 UTC | 26 Aug 24 12:15 UTC |
	|         | default-k8s-diff-port-697869                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 12:06:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 12:06:55.804794  153366 out.go:345] Setting OutFile to fd 1 ...
	I0826 12:06:55.805114  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805125  153366 out.go:358] Setting ErrFile to fd 2...
	I0826 12:06:55.805129  153366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 12:06:55.805378  153366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 12:06:55.806009  153366 out.go:352] Setting JSON to false
	I0826 12:06:55.806989  153366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6557,"bootTime":1724667459,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 12:06:55.807056  153366 start.go:139] virtualization: kvm guest
	I0826 12:06:55.809200  153366 out.go:177] * [default-k8s-diff-port-697869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 12:06:55.810757  153366 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 12:06:55.810779  153366 notify.go:220] Checking for updates...
	I0826 12:06:55.813352  153366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 12:06:55.814876  153366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:06:55.816231  153366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 12:06:55.817536  153366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 12:06:55.819049  153366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 12:06:55.820974  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:06:55.821368  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.821428  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.837973  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0826 12:06:55.838484  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.839113  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.839132  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.839537  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.839758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.840059  153366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 12:06:55.840392  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:06:55.840446  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:06:55.855990  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0826 12:06:55.856535  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:06:55.857044  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:06:55.857070  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:06:55.857398  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:06:55.857606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:06:55.892165  153366 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 12:06:55.893462  153366 start.go:297] selected driver: kvm2
	I0826 12:06:55.893491  153366 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.893612  153366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 12:06:55.894295  153366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.894372  153366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 12:06:55.911403  153366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 12:06:55.911782  153366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:06:55.911825  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:06:55.911833  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:06:55.911942  153366 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:06:55.912047  153366 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 12:06:55.914819  153366 out.go:177] * Starting "default-k8s-diff-port-697869" primary control-plane node in "default-k8s-diff-port-697869" cluster
	I0826 12:06:58.095139  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:06:55.916120  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:06:55.916158  153366 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 12:06:55.916168  153366 cache.go:56] Caching tarball of preloaded images
	I0826 12:06:55.916249  153366 preload.go:172] Found /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0826 12:06:55.916260  153366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0826 12:06:55.916361  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:06:55.916578  153366 start.go:360] acquireMachinesLock for default-k8s-diff-port-697869: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:07:01.167159  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:07.247157  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:10.319093  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:16.399177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:19.471168  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:25.551154  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:28.623156  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:34.703152  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:37.775237  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:43.855164  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:46.927177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:53.007138  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:07:56.079172  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:02.159134  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:05.231114  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:11.311126  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:14.383170  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:20.463130  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:23.535190  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:29.615145  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:32.687246  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:38.767150  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:41.839214  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:47.919149  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:50.991177  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:08:57.071142  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:00.143127  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:06.223158  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:09.295167  152463 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.213:22: connect: no route to host
	I0826 12:09:12.299677  152550 start.go:364] duration metric: took 4m34.363707329s to acquireMachinesLock for "embed-certs-923586"
	I0826 12:09:12.299740  152550 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:12.299746  152550 fix.go:54] fixHost starting: 
	I0826 12:09:12.300074  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:12.300107  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:12.316195  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0826 12:09:12.316679  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:12.317193  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:09:12.317222  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:12.317544  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:12.317738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:12.317890  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:09:12.319718  152550 fix.go:112] recreateIfNeeded on embed-certs-923586: state=Stopped err=<nil>
	I0826 12:09:12.319757  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	W0826 12:09:12.319928  152550 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:12.322756  152550 out.go:177] * Restarting existing kvm2 VM for "embed-certs-923586" ...
	I0826 12:09:12.324242  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Start
	I0826 12:09:12.324436  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring networks are active...
	I0826 12:09:12.325340  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network default is active
	I0826 12:09:12.325727  152550 main.go:141] libmachine: (embed-certs-923586) Ensuring network mk-embed-certs-923586 is active
	I0826 12:09:12.326016  152550 main.go:141] libmachine: (embed-certs-923586) Getting domain xml...
	I0826 12:09:12.326704  152550 main.go:141] libmachine: (embed-certs-923586) Creating domain...
	I0826 12:09:12.297008  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:12.297049  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297404  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:09:12.297433  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:09:12.297769  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:09:12.299520  152463 machine.go:96] duration metric: took 4m37.402469334s to provisionDockerMachine
	I0826 12:09:12.299563  152463 fix.go:56] duration metric: took 4m37.426061512s for fixHost
	I0826 12:09:12.299570  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 4m37.426083406s
	W0826 12:09:12.299602  152463 start.go:714] error starting host: provision: host is not running
	W0826 12:09:12.299700  152463 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0826 12:09:12.299714  152463 start.go:729] Will try again in 5 seconds ...
	I0826 12:09:13.587774  152550 main.go:141] libmachine: (embed-certs-923586) Waiting to get IP...
	I0826 12:09:13.588804  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.589502  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.589606  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.589472  153863 retry.go:31] will retry after 233.612197ms: waiting for machine to come up
	I0826 12:09:13.825289  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:13.825694  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:13.825716  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:13.825640  153863 retry.go:31] will retry after 278.757003ms: waiting for machine to come up
	I0826 12:09:14.106215  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.106555  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.106604  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.106513  153863 retry.go:31] will retry after 438.455545ms: waiting for machine to come up
	I0826 12:09:14.546036  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:14.546434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:14.546461  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:14.546390  153863 retry.go:31] will retry after 471.25312ms: waiting for machine to come up
	I0826 12:09:15.019018  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.019413  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.019441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.019398  153863 retry.go:31] will retry after 547.251596ms: waiting for machine to come up
	I0826 12:09:15.568156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:15.568417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:15.568446  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:15.568366  153863 retry.go:31] will retry after 602.422279ms: waiting for machine to come up
	I0826 12:09:16.172056  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:16.172588  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:16.172613  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:16.172520  153863 retry.go:31] will retry after 990.562884ms: waiting for machine to come up
	I0826 12:09:17.164920  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:17.165417  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:17.165441  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:17.165361  153863 retry.go:31] will retry after 1.291254906s: waiting for machine to come up
	I0826 12:09:17.301413  152463 start.go:360] acquireMachinesLock for no-preload-956479: {Name:mk4886b5ecbc273cdbec9438c757b373d75bc166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0826 12:09:18.458402  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:18.458881  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:18.458913  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:18.458796  153863 retry.go:31] will retry after 1.757955514s: waiting for machine to come up
	I0826 12:09:20.218876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:20.219306  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:20.219329  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:20.219276  153863 retry.go:31] will retry after 1.629705685s: waiting for machine to come up
	I0826 12:09:21.850442  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:21.850858  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:21.850889  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:21.850800  153863 retry.go:31] will retry after 2.281035685s: waiting for machine to come up
	I0826 12:09:24.133867  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:24.134245  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:24.134273  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:24.134193  153863 retry.go:31] will retry after 3.498910639s: waiting for machine to come up
	I0826 12:09:27.635304  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:27.635727  152550 main.go:141] libmachine: (embed-certs-923586) DBG | unable to find current IP address of domain embed-certs-923586 in network mk-embed-certs-923586
	I0826 12:09:27.635762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | I0826 12:09:27.635665  153863 retry.go:31] will retry after 3.250723751s: waiting for machine to come up
	I0826 12:09:32.191598  152982 start.go:364] duration metric: took 3m50.364189217s to acquireMachinesLock for "old-k8s-version-839656"
	I0826 12:09:32.191690  152982 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:32.191702  152982 fix.go:54] fixHost starting: 
	I0826 12:09:32.192120  152982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:32.192160  152982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:32.209470  152982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0826 12:09:32.209924  152982 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:32.210423  152982 main.go:141] libmachine: Using API Version  1
	I0826 12:09:32.210446  152982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:32.210781  152982 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:32.210982  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:32.211153  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetState
	I0826 12:09:32.212801  152982 fix.go:112] recreateIfNeeded on old-k8s-version-839656: state=Stopped err=<nil>
	I0826 12:09:32.212839  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	W0826 12:09:32.213022  152982 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:32.215081  152982 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-839656" ...
	I0826 12:09:30.890060  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890595  152550 main.go:141] libmachine: (embed-certs-923586) Found IP for machine: 192.168.39.6
	I0826 12:09:30.890628  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has current primary IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.890642  152550 main.go:141] libmachine: (embed-certs-923586) Reserving static IP address...
	I0826 12:09:30.891114  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.891138  152550 main.go:141] libmachine: (embed-certs-923586) DBG | skip adding static IP to network mk-embed-certs-923586 - found existing host DHCP lease matching {name: "embed-certs-923586", mac: "52:54:00:2e:e9:ed", ip: "192.168.39.6"}
	I0826 12:09:30.891148  152550 main.go:141] libmachine: (embed-certs-923586) Reserved static IP address: 192.168.39.6
	I0826 12:09:30.891160  152550 main.go:141] libmachine: (embed-certs-923586) Waiting for SSH to be available...
	I0826 12:09:30.891171  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Getting to WaitForSSH function...
	I0826 12:09:30.893189  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893470  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:30.893500  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:30.893616  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH client type: external
	I0826 12:09:30.893640  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa (-rw-------)
	I0826 12:09:30.893682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:30.893696  152550 main.go:141] libmachine: (embed-certs-923586) DBG | About to run SSH command:
	I0826 12:09:30.893714  152550 main.go:141] libmachine: (embed-certs-923586) DBG | exit 0
	I0826 12:09:31.014809  152550 main.go:141] libmachine: (embed-certs-923586) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:31.015188  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetConfigRaw
	I0826 12:09:31.015829  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.018458  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.018812  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.018855  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.019100  152550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/config.json ...
	I0826 12:09:31.019329  152550 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:31.019348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.019561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.021826  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022132  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.022156  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.022293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.022460  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022622  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.022733  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.022906  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.023108  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.023121  152550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:31.123039  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:31.123080  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123410  152550 buildroot.go:166] provisioning hostname "embed-certs-923586"
	I0826 12:09:31.123443  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.123738  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.126455  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126777  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.126814  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.126922  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.127161  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127351  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.127522  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.127719  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.127909  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.127924  152550 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-923586 && echo "embed-certs-923586" | sudo tee /etc/hostname
	I0826 12:09:31.240946  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-923586
	
	I0826 12:09:31.240981  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.243695  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244041  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.244079  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.244240  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.244453  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244617  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.244742  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.244900  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.245095  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.245113  152550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-923586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-923586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-923586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:31.355875  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:31.355909  152550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:31.355933  152550 buildroot.go:174] setting up certificates
	I0826 12:09:31.355947  152550 provision.go:84] configureAuth start
	I0826 12:09:31.355960  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetMachineName
	I0826 12:09:31.356300  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:31.359092  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.359407  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.359596  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.362078  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362396  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.362429  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.362538  152550 provision.go:143] copyHostCerts
	I0826 12:09:31.362632  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:31.362656  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:31.362743  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:31.362888  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:31.362900  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:31.362939  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:31.363021  152550 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:31.363031  152550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:31.363065  152550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:31.363135  152550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-923586 san=[127.0.0.1 192.168.39.6 embed-certs-923586 localhost minikube]
	I0826 12:09:31.549410  152550 provision.go:177] copyRemoteCerts
	I0826 12:09:31.549482  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:31.549517  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.552293  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552647  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.552681  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.552914  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.553119  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.553276  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.553416  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:31.633032  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:31.657117  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:09:31.680707  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:31.703441  152550 provision.go:87] duration metric: took 347.478825ms to configureAuth
	I0826 12:09:31.703477  152550 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:31.703678  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:09:31.703752  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.706384  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.706876  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.706909  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.707110  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.707364  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707561  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.707762  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.708005  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:31.708232  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:31.708252  152550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:31.963380  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:31.963417  152550 machine.go:96] duration metric: took 944.071305ms to provisionDockerMachine
	I0826 12:09:31.963435  152550 start.go:293] postStartSetup for "embed-certs-923586" (driver="kvm2")
	I0826 12:09:31.963452  152550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:31.963481  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:31.963878  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:31.963913  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:31.966558  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.966981  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:31.967010  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:31.967186  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:31.967413  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:31.967587  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:31.967732  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.049232  152550 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:32.053165  152550 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:32.053195  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:32.053278  152550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:32.053378  152550 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:32.053495  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:32.062420  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:32.085277  152550 start.go:296] duration metric: took 121.824784ms for postStartSetup
	I0826 12:09:32.085335  152550 fix.go:56] duration metric: took 19.785587858s for fixHost
	I0826 12:09:32.085362  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.088039  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088332  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.088360  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.088560  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.088832  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089012  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.089191  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.089365  152550 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:32.089529  152550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0826 12:09:32.089539  152550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:32.191413  152550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674172.168471460
	
	I0826 12:09:32.191440  152550 fix.go:216] guest clock: 1724674172.168471460
	I0826 12:09:32.191450  152550 fix.go:229] Guest: 2024-08-26 12:09:32.16847146 +0000 UTC Remote: 2024-08-26 12:09:32.085340981 +0000 UTC m=+294.301169364 (delta=83.130479ms)
	I0826 12:09:32.191485  152550 fix.go:200] guest clock delta is within tolerance: 83.130479ms
	I0826 12:09:32.191493  152550 start.go:83] releasing machines lock for "embed-certs-923586", held for 19.891774014s
	I0826 12:09:32.191526  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.191861  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:32.194589  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.194980  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.195019  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.195207  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.195866  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196071  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:09:32.196167  152550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:32.196288  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.196319  152550 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:32.196348  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:09:32.199088  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199546  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.199598  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199682  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.199776  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.199977  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200105  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:32.200124  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:32.200148  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200317  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:09:32.200367  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.200482  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:09:32.200663  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:09:32.200824  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:09:32.285244  152550 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:32.317027  152550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:32.466233  152550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:32.472677  152550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:32.472768  152550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:32.490080  152550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:32.490111  152550 start.go:495] detecting cgroup driver to use...
	I0826 12:09:32.490189  152550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:32.509031  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:32.524361  152550 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:32.524417  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:32.539259  152550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:32.553276  152550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:32.676018  152550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:32.833702  152550 docker.go:233] disabling docker service ...
	I0826 12:09:32.833779  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:32.851253  152550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:32.865578  152550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:33.000922  152550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:33.129916  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
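The steps above take the Docker-shipped runtimes out of the picture before CRI-O is configured: each competing unit is stopped, disabled, and finally masked so nothing can re-activate it. A minimal local Go sketch of that same stop/disable/mask sequence (illustrative only, not minikube's ssh_runner code; it assumes sudo and systemctl are available):

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit runs the same systemctl sequence the log shows for
// cri-docker and docker: stop the unit, disable it, then mask it so it
// cannot come back behind the container runtime's back.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		cmd := exec.Command("sudo", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			// Failures are tolerated, as in the log: the unit may simply not exist.
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}
}

func main() {
	disableUnit("cri-docker.socket")
	disableUnit("docker.service")
}
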
	I0826 12:09:33.144209  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:33.162946  152550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:09:33.163010  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.174271  152550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:33.174360  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.189085  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.204388  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.218151  152550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:09:33.234931  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.257016  152550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.280905  152550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:33.293033  152550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:33.303161  152550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:33.303235  152550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:33.316560  152550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
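Here the sysctl probe fails with status 255 because the br_netfilter module is not loaded yet, so the driver falls back to modprobe and then enables IPv4 forwarding. A rough Go equivalent of that check-then-load fallback, shelling out to the same commands (a sketch, not the actual crio.go logic):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The net.bridge.bridge-nf-call-iptables key only exists once br_netfilter is loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
}
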
	I0826 12:09:33.326319  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:33.449279  152550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:33.587642  152550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:33.587722  152550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:33.592423  152550 start.go:563] Will wait 60s for crictl version
	I0826 12:09:33.592495  152550 ssh_runner.go:195] Run: which crictl
	I0826 12:09:33.596628  152550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:33.633109  152550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:33.633225  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.661128  152550 ssh_runner.go:195] Run: crio --version
	I0826 12:09:33.692222  152550 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
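After `systemctl restart crio` the driver waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for the runtime version. A small sketch of that kind of bounded wait on a socket path (the polling interval is an assumption; the real code uses its own retry helpers):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}
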
	I0826 12:09:32.216396  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .Start
	I0826 12:09:32.216630  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring networks are active...
	I0826 12:09:32.217414  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network default is active
	I0826 12:09:32.217851  152982 main.go:141] libmachine: (old-k8s-version-839656) Ensuring network mk-old-k8s-version-839656 is active
	I0826 12:09:32.218286  152982 main.go:141] libmachine: (old-k8s-version-839656) Getting domain xml...
	I0826 12:09:32.219128  152982 main.go:141] libmachine: (old-k8s-version-839656) Creating domain...
	I0826 12:09:33.500501  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting to get IP...
	I0826 12:09:33.501678  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.502100  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.502202  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.502072  154009 retry.go:31] will retry after 193.282008ms: waiting for machine to come up
	I0826 12:09:33.697223  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.697688  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.697760  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.697669  154009 retry.go:31] will retry after 252.110347ms: waiting for machine to come up
	I0826 12:09:33.951330  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:33.952639  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:33.952677  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:33.952616  154009 retry.go:31] will retry after 436.954293ms: waiting for machine to come up
	I0826 12:09:34.391109  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.391724  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.391759  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.391676  154009 retry.go:31] will retry after 402.13367ms: waiting for machine to come up
	I0826 12:09:34.795471  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:34.796036  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:34.796060  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:34.795991  154009 retry.go:31] will retry after 738.867168ms: waiting for machine to come up
	I0826 12:09:35.537041  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:35.537518  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:35.537539  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:35.537476  154009 retry.go:31] will retry after 884.001928ms: waiting for machine to come up
	I0826 12:09:36.423984  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:36.424400  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:36.424432  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:36.424336  154009 retry.go:31] will retry after 958.887984ms: waiting for machine to come up
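The old-k8s-version-839656 domain has just been restarted, so libvirt has not handed out a DHCP lease yet; the "will retry after …ms" lines come from a backoff loop that re-reads the domain's leases until an IP shows up. A simplified sketch of such a capped, growing backoff (the delays and the placeholder address are made up for illustration, not retry.go's actual values):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP keeps calling lookup until it returns an address or the overall
// deadline passes, roughly doubling the delay between attempts like the
// retry.go lines in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "198.51.100.42", nil // placeholder address for the sketch
	}, 30*time.Second)
	fmt.Println(ip, err)
}
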
	I0826 12:09:33.693650  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetIP
	I0826 12:09:33.696950  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:09:33.697385  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:09:33.697661  152550 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:33.701975  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
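The bash one-liner above rewrites /etc/hosts so exactly one line maps host.minikube.internal to the host-side gateway 192.168.39.1: it filters out any existing entry, appends a fresh one, and copies the result back into place. The same idea expressed in Go, operating on a local copy of the file (the path and the lack of a sudo copy-back are simplifications for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so there is exactly one "<ip>\t<name>" line,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Works on a local copy here; the real /etc/hosts needs root to write.
	if err := upsertHost("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
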
	I0826 12:09:33.715404  152550 kubeadm.go:883] updating cluster {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:33.715541  152550 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:09:33.715646  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:33.756477  152550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:09:33.756546  152550 ssh_runner.go:195] Run: which lz4
	I0826 12:09:33.761027  152550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:33.765139  152550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:33.765181  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:09:35.106552  152550 crio.go:462] duration metric: took 1.345552742s to copy over tarball
	I0826 12:09:35.106656  152550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:37.299491  152550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192805053s)
	I0826 12:09:37.299548  152550 crio.go:469] duration metric: took 2.192938832s to extract the tarball
	I0826 12:09:37.299560  152550 ssh_runner.go:146] rm: /preloaded.tar.lz4
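Because no preloaded images were found in the CRI-O store, the ~389 MB preload tarball is copied into the VM and unpacked under /var with tar's lz4 filter, and the log records how long each step took. A sketch of timing such an extraction (the paths and the presence of an lz4 binary are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same shape as the command in the log: extract a .tar.lz4 under /var,
	// preserving extended attributes so image layers keep their capabilities.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
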
	I0826 12:09:37.337654  152550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:37.378117  152550 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:09:37.378144  152550 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:09:37.378155  152550 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0826 12:09:37.378276  152550 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-923586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:37.378375  152550 ssh_runner.go:195] Run: crio config
	I0826 12:09:37.438148  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:37.438182  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:37.438200  152550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:37.438229  152550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-923586 NodeName:embed-certs-923586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:09:37.438436  152550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-923586"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:37.438525  152550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:09:37.451742  152550 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:37.451824  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:37.463078  152550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0826 12:09:37.481563  152550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:37.499615  152550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0826 12:09:37.518753  152550 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:37.523612  152550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:37.535774  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:37.664131  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:37.681227  152550 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586 for IP: 192.168.39.6
	I0826 12:09:37.681254  152550 certs.go:194] generating shared ca certs ...
	I0826 12:09:37.681293  152550 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:37.681467  152550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:37.681529  152550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:37.681542  152550 certs.go:256] generating profile certs ...
	I0826 12:09:37.681665  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/client.key
	I0826 12:09:37.681751  152550 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key.f0cd25f6
	I0826 12:09:37.681813  152550 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key
	I0826 12:09:37.681967  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:37.682018  152550 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:37.682029  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:37.682064  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:37.682100  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:37.682136  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:37.682199  152550 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:37.683214  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:37.721802  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:37.756110  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:09:37.786038  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:09:37.818026  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0826 12:09:37.385261  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:37.385737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:37.385767  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:37.385679  154009 retry.go:31] will retry after 991.322442ms: waiting for machine to come up
	I0826 12:09:38.379002  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:38.379428  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:38.379457  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:38.379382  154009 retry.go:31] will retry after 1.199531339s: waiting for machine to come up
	I0826 12:09:39.581068  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:39.581551  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:39.581581  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:39.581506  154009 retry.go:31] will retry after 1.74680502s: waiting for machine to come up
	I0826 12:09:41.330775  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:41.331224  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:41.331254  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:41.331170  154009 retry.go:31] will retry after 2.648889988s: waiting for machine to come up
	I0826 12:09:37.843982  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:09:37.869902  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:09:37.893757  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/embed-certs-923586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:09:37.917320  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:09:37.940492  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:09:37.964211  152550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:09:37.987907  152550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:09:38.004414  152550 ssh_runner.go:195] Run: openssl version
	I0826 12:09:38.010144  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:09:38.020820  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025245  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.025324  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:09:38.031174  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:09:38.041847  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:09:38.052764  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057501  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.057591  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:09:38.063840  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:09:38.075173  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:09:38.085770  152550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089921  152550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.089986  152550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:09:38.095373  152550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:09:38.105709  152550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:09:38.110189  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:09:38.115952  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:09:38.121463  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:09:38.127423  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:09:38.132968  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:09:38.138735  152550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
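The run above validates every control-plane certificate with `openssl x509 -checkend 86400`, i.e. "does this cert remain valid for at least another 24 hours". The same check is easy to express directly against the PEM file; a hedged Go sketch (the file path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in pemPath is still valid
// at now+window, matching what `openssl x509 -checkend <seconds>` tests.
func validFor(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
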
	I0826 12:09:38.144517  152550 kubeadm.go:392] StartCluster: {Name:embed-certs-923586 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-923586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:09:38.144671  152550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:09:38.144748  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.179325  152550 cri.go:89] found id: ""
	I0826 12:09:38.179409  152550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:09:38.189261  152550 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:09:38.189296  152550 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:09:38.189368  152550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:09:38.198923  152550 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:09:38.200065  152550 kubeconfig.go:125] found "embed-certs-923586" server: "https://192.168.39.6:8443"
	I0826 12:09:38.202145  152550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:09:38.211371  152550 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I0826 12:09:38.211415  152550 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:09:38.211431  152550 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:09:38.211501  152550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:09:38.245861  152550 cri.go:89] found id: ""
	I0826 12:09:38.245945  152550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:09:38.262469  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:09:38.272693  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:09:38.272721  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:09:38.272780  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:09:38.281704  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:09:38.281779  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:09:38.291042  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:09:38.299990  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:09:38.300057  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:09:38.309982  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.319474  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:09:38.319536  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:09:38.329345  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:09:38.338548  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:09:38.338649  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:09:38.349124  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
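On this restart the /etc/kubernetes/*.conf files do not exist yet, so each grep for the expected control-plane endpoint exits non-zero and the file is removed (a no-op here) before kubeadm regenerates it. The check itself boils down to "does this kubeconfig mention https://control-plane.minikube.internal:8443"; a small sketch of that stale-config cleanup (run as root in practice, since the files live under /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneIfStale removes path unless it mentions the wanted API endpoint,
// like the grep/rm pairs in the log. A missing file counts as already clean.
func pruneIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil
	}
	fmt.Printf("%q not found in %s - removing stale config\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneIfStale(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
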
	I0826 12:09:38.359112  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:38.470240  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.758142  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28785788s)
	I0826 12:09:39.758180  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:39.973482  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.044459  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:40.143679  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:09:40.143844  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:40.644217  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.144357  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:09:41.160970  152550 api_server.go:72] duration metric: took 1.017300298s to wait for apiserver process to appear ...
	I0826 12:09:41.161005  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:09:41.161032  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.548928  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.548971  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.548988  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.580924  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:09:43.580991  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:09:43.661191  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:43.667248  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:43.667278  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.161959  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.177173  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.177216  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:44.661798  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:44.668406  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:09:44.668456  152550 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:09:45.162005  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:09:45.168111  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:09:45.174487  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:09:45.174525  152550 api_server.go:131] duration metric: took 4.013513808s to wait for apiserver health ...
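The sequence above is the usual recovery pattern after a control-plane restart: /healthz first answers 403 for the anonymous probe, then 500 while etcd and the post-start hooks catch up, and finally 200 about four seconds in. Polling such an endpoint until it returns 200 looks roughly like this (TLS verification is skipped only to keep the sketch short; the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready yet:", resp.Status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.6:8443/healthz", time.Minute))
}
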
	I0826 12:09:45.174536  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:09:45.174543  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:45.176809  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:09:43.982234  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:43.982681  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:43.982714  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:43.982593  154009 retry.go:31] will retry after 2.916473093s: waiting for machine to come up
	I0826 12:09:45.178235  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:09:45.189704  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
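The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is a standard bridge CNI chain for the 10.244.0.0/16 pod CIDR chosen earlier. Its exact contents are not shown in the log; the Go snippet below emits a representative conflist of that shape purely for illustration (field values are assumptions, not a copy of what minikube wrote):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Representative bridge + portmap chain for a 10.244.0.0/16 pod network.
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
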
	I0826 12:09:45.250046  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:09:45.262420  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:09:45.262460  152550 system_pods.go:61] "coredns-6f6b679f8f-h4wmk" [39b276c0-68ef-4dc9-9f73-ee79c2c14625] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262467  152550 system_pods.go:61] "coredns-6f6b679f8f-l5z8f" [7e0082cc-2364-499c-bdb8-5f2ee7ee5fa7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:09:45.262473  152550 system_pods.go:61] "etcd-embed-certs-923586" [06d68f69-a99f-4b34-87c7-e2fb80cdd886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:09:45.262481  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [2d0952e2-f5d9-49e8-b957-00f92dbbc436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:09:45.262490  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [2e632e39-6249-40e3-82ab-74e820a84f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:09:45.262495  152550 system_pods.go:61] "kube-proxy-wfl6s" [9f690d4f-11ee-4e67-aa8a-2c3e304d699d] Running
	I0826 12:09:45.262500  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [47d66689-0a4c-4811-b4f0-2481034f1684] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:09:45.262505  152550 system_pods.go:61] "metrics-server-6867b74b74-cw5t8" [1bced435-db48-46d6-9c76-fb13050a7851] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:09:45.262510  152550 system_pods.go:61] "storage-provisioner" [259f7851-96da-42c3-aae3-35d13ec21573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:09:45.262522  152550 system_pods.go:74] duration metric: took 12.449002ms to wait for pod list to return data ...
	I0826 12:09:45.262531  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:09:45.276323  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:09:45.276359  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:09:45.276372  152550 node_conditions.go:105] duration metric: took 13.836307ms to run NodePressure ...
	I0826 12:09:45.276389  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:45.558970  152550 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563147  152550 kubeadm.go:739] kubelet initialised
	I0826 12:09:45.563168  152550 kubeadm.go:740] duration metric: took 4.16477ms waiting for restarted kubelet to initialise ...
	I0826 12:09:45.563176  152550 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:09:45.574933  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.581504  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581530  152550 pod_ready.go:82] duration metric: took 6.568456ms for pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.581548  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-h4wmk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.581557  152550 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.587904  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587935  152550 pod_ready.go:82] duration metric: took 6.368664ms for pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.587945  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "coredns-6f6b679f8f-l5z8f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.587956  152550 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.592416  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592440  152550 pod_ready.go:82] duration metric: took 4.475923ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.592448  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "etcd-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.592453  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:45.654230  152550 pod_ready.go:98] node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654265  152550 pod_ready.go:82] duration metric: took 61.80344ms for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	E0826 12:09:45.654275  152550 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-923586" hosting pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-923586" has status "Ready":"False"
	I0826 12:09:45.654282  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:47.659899  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:46.902687  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:46.903209  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | unable to find current IP address of domain old-k8s-version-839656 in network mk-old-k8s-version-839656
	I0826 12:09:46.903243  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | I0826 12:09:46.903150  154009 retry.go:31] will retry after 4.06528556s: waiting for machine to come up
	I0826 12:09:50.972745  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973257  152982 main.go:141] libmachine: (old-k8s-version-839656) Found IP for machine: 192.168.72.136
	I0826 12:09:50.973280  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserving static IP address...
	I0826 12:09:50.973297  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has current primary IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.973616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.973653  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | skip adding static IP to network mk-old-k8s-version-839656 - found existing host DHCP lease matching {name: "old-k8s-version-839656", mac: "52:54:00:c2:da:28", ip: "192.168.72.136"}
	I0826 12:09:50.973672  152982 main.go:141] libmachine: (old-k8s-version-839656) Reserved static IP address: 192.168.72.136
	I0826 12:09:50.973693  152982 main.go:141] libmachine: (old-k8s-version-839656) Waiting for SSH to be available...
	I0826 12:09:50.973737  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Getting to WaitForSSH function...
	I0826 12:09:50.976028  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976406  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:50.976438  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:50.976544  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH client type: external
	I0826 12:09:50.976598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa (-rw-------)
	I0826 12:09:50.976622  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:09:50.976632  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | About to run SSH command:
	I0826 12:09:50.976642  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | exit 0
	I0826 12:09:51.107476  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | SSH cmd err, output: <nil>: 
	I0826 12:09:51.107964  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetConfigRaw
	I0826 12:09:51.108748  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.111740  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112251  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.112281  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.112613  152982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/config.json ...
	I0826 12:09:51.112820  152982 machine.go:93] provisionDockerMachine start ...
	I0826 12:09:51.112842  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.113094  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.115616  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116011  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.116042  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.116213  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.116382  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116483  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.116618  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.116815  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.117105  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.117120  152982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:09:51.219189  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:09:51.219220  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219528  152982 buildroot.go:166] provisioning hostname "old-k8s-version-839656"
	I0826 12:09:51.219558  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.219798  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.222773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223300  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.223337  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.223511  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.223750  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.223975  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.224156  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.224364  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.224610  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.224625  152982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-839656 && echo "old-k8s-version-839656" | sudo tee /etc/hostname
	I0826 12:09:51.340951  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-839656
	
	I0826 12:09:51.340995  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.343773  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344119  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.344144  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.344312  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.344531  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344731  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.344865  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.345037  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.345207  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.345224  152982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-839656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-839656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-839656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:09:51.456135  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:09:51.456180  152982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:09:51.456233  152982 buildroot.go:174] setting up certificates
	I0826 12:09:51.456247  152982 provision.go:84] configureAuth start
	I0826 12:09:51.456263  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetMachineName
	I0826 12:09:51.456585  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:51.459426  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.459852  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.459895  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.460083  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.462404  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462754  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.462788  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.462984  152982 provision.go:143] copyHostCerts
	I0826 12:09:51.463042  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:09:51.463061  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:09:51.463118  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:09:51.463225  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:09:51.463235  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:09:51.463255  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:09:51.463306  152982 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:09:51.463313  152982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:09:51.463331  152982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:09:51.463381  152982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-839656 san=[127.0.0.1 192.168.72.136 localhost minikube old-k8s-version-839656]
	I0826 12:09:51.533462  152982 provision.go:177] copyRemoteCerts
	I0826 12:09:51.533528  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:09:51.533556  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.536586  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.536967  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.536991  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.537268  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.537519  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.537729  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.537894  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:51.617503  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:09:51.642966  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0826 12:09:51.669120  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0826 12:09:51.693595  152982 provision.go:87] duration metric: took 237.331736ms to configureAuth
	I0826 12:09:51.693629  152982 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:09:51.693808  152982 config.go:182] Loaded profile config "old-k8s-version-839656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 12:09:51.693895  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.697161  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697508  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.697553  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.697789  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.698042  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698207  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.698394  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.698565  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:51.698798  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:51.698819  152982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:09:52.187972  153366 start.go:364] duration metric: took 2m56.271360342s to acquireMachinesLock for "default-k8s-diff-port-697869"
	I0826 12:09:52.188045  153366 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:09:52.188053  153366 fix.go:54] fixHost starting: 
	I0826 12:09:52.188497  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:09:52.188541  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:09:52.209451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0826 12:09:52.209960  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:09:52.210572  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:09:52.210591  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:09:52.211008  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:09:52.211232  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:09:52.211382  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:09:52.213165  153366 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697869: state=Stopped err=<nil>
	I0826 12:09:52.213198  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	W0826 12:09:52.213359  153366 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:09:52.215535  153366 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-697869" ...
	I0826 12:09:49.662002  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.663287  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:51.959544  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:09:51.959580  152982 machine.go:96] duration metric: took 846.74482ms to provisionDockerMachine
	I0826 12:09:51.959595  152982 start.go:293] postStartSetup for "old-k8s-version-839656" (driver="kvm2")
	I0826 12:09:51.959606  152982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:09:51.959628  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:51.959989  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:09:51.960024  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:51.962912  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963278  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:51.963304  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:51.963520  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:51.963756  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:51.963954  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:51.964082  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.046059  152982 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:09:52.050013  152982 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:09:52.050045  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:09:52.050119  152982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:09:52.050225  152982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:09:52.050345  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:09:52.059871  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:52.082494  152982 start.go:296] duration metric: took 122.880191ms for postStartSetup
	I0826 12:09:52.082546  152982 fix.go:56] duration metric: took 19.890844987s for fixHost
	I0826 12:09:52.082576  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.085291  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085726  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.085772  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.085898  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.086116  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086307  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.086457  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.086659  152982 main.go:141] libmachine: Using SSH client type: native
	I0826 12:09:52.086841  152982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0826 12:09:52.086856  152982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:09:52.187806  152982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674192.159623045
	
	I0826 12:09:52.187839  152982 fix.go:216] guest clock: 1724674192.159623045
	I0826 12:09:52.187846  152982 fix.go:229] Guest: 2024-08-26 12:09:52.159623045 +0000 UTC Remote: 2024-08-26 12:09:52.082552402 +0000 UTC m=+250.413281630 (delta=77.070643ms)
	I0826 12:09:52.187868  152982 fix.go:200] guest clock delta is within tolerance: 77.070643ms
	I0826 12:09:52.187873  152982 start.go:83] releasing machines lock for "old-k8s-version-839656", held for 19.996211523s
	I0826 12:09:52.187905  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.188210  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:52.191003  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191480  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.191511  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.191670  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192375  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192595  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .DriverName
	I0826 12:09:52.192733  152982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:09:52.192794  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.192854  152982 ssh_runner.go:195] Run: cat /version.json
	I0826 12:09:52.192883  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHHostname
	I0826 12:09:52.195598  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195757  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.195965  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.195994  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196172  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196256  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:52.196290  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:52.196424  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHPort
	I0826 12:09:52.196463  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196624  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196627  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHKeyPath
	I0826 12:09:52.196812  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetSSHUsername
	I0826 12:09:52.196842  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.196954  152982 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/old-k8s-version-839656/id_rsa Username:docker}
	I0826 12:09:52.304741  152982 ssh_runner.go:195] Run: systemctl --version
	I0826 12:09:52.311072  152982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:09:52.457568  152982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:09:52.465123  152982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:09:52.465211  152982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:09:52.487320  152982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:09:52.487351  152982 start.go:495] detecting cgroup driver to use...
	I0826 12:09:52.487459  152982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:09:52.509680  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:09:52.526517  152982 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:09:52.526615  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:09:52.540741  152982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:09:52.554819  152982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:09:52.677611  152982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:09:52.829605  152982 docker.go:233] disabling docker service ...
	I0826 12:09:52.829706  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:09:52.844862  152982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:09:52.859869  152982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:09:53.021968  152982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:09:53.156768  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:09:53.173028  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:09:53.194573  152982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0826 12:09:53.194641  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.204783  152982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:09:53.204873  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.215395  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.225578  152982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:09:53.235810  152982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
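Taken together, and assuming the sed edits above applied cleanly, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf would now read roughly as follows (the TOML section headers come from the stock buildroot config and are not shown in this log):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"

This matches the "Preparing Kubernetes v1.20.0 on CRI-O 1.29.1" setup reported a few lines later.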
	I0826 12:09:53.246635  152982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:09:53.257399  152982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:09:53.257467  152982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:09:53.273553  152982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:09:53.283339  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:53.432394  152982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:09:53.583340  152982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:09:53.583443  152982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:09:53.590729  152982 start.go:563] Will wait 60s for crictl version
	I0826 12:09:53.590877  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:53.596292  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:09:53.656413  152982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:09:53.656523  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.685569  152982 ssh_runner.go:195] Run: crio --version
	I0826 12:09:53.716571  152982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0826 12:09:52.217358  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Start
	I0826 12:09:52.217561  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring networks are active...
	I0826 12:09:52.218523  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network default is active
	I0826 12:09:52.218930  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Ensuring network mk-default-k8s-diff-port-697869 is active
	I0826 12:09:52.219443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Getting domain xml...
	I0826 12:09:52.220240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Creating domain...
	I0826 12:09:53.637205  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting to get IP...
	I0826 12:09:53.638259  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638719  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.638757  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.638648  154153 retry.go:31] will retry after 309.073725ms: waiting for machine to come up
	I0826 12:09:53.949323  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.949986  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:53.950021  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:53.949941  154153 retry.go:31] will retry after 389.554302ms: waiting for machine to come up
	I0826 12:09:54.341836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342416  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.342458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.342359  154153 retry.go:31] will retry after 314.065385ms: waiting for machine to come up
	I0826 12:09:54.657763  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658394  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:54.658425  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:54.658336  154153 retry.go:31] will retry after 564.24487ms: waiting for machine to come up
	I0826 12:09:55.224230  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224610  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.224664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.224578  154153 retry.go:31] will retry after 685.123739ms: waiting for machine to come up
	I0826 12:09:53.718104  152982 main.go:141] libmachine: (old-k8s-version-839656) Calling .GetIP
	I0826 12:09:53.721461  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.721900  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:da:28", ip: ""} in network mk-old-k8s-version-839656: {Iface:virbr4 ExpiryTime:2024-08-26 13:09:43 +0000 UTC Type:0 Mac:52:54:00:c2:da:28 Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:old-k8s-version-839656 Clientid:01:52:54:00:c2:da:28}
	I0826 12:09:53.721938  152982 main.go:141] libmachine: (old-k8s-version-839656) DBG | domain old-k8s-version-839656 has defined IP address 192.168.72.136 and MAC address 52:54:00:c2:da:28 in network mk-old-k8s-version-839656
	I0826 12:09:53.722137  152982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0826 12:09:53.726404  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
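Assuming that bash one-liner succeeded, /etc/hosts on the guest now carries exactly one entry for the host gateway:

	    192.168.72.1	host.minikube.internal

so the node (and pods using host networking) can resolve host.minikube.internal back to the hypervisor host.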
	I0826 12:09:53.738999  152982 kubeadm.go:883] updating cluster {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:09:53.739130  152982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 12:09:53.739182  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:53.791456  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:53.791561  152982 ssh_runner.go:195] Run: which lz4
	I0826 12:09:53.795624  152982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:09:53.799857  152982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:09:53.799892  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0826 12:09:55.402637  152982 crio.go:462] duration metric: took 1.607044522s to copy over tarball
	I0826 12:09:55.402746  152982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:09:54.163063  152550 pod_ready.go:103] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.662394  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.662428  152550 pod_ready.go:82] duration metric: took 10.008136426s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.662445  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668522  152550 pod_ready.go:93] pod "kube-proxy-wfl6s" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:55.668557  152550 pod_ready.go:82] duration metric: took 6.10318ms for pod "kube-proxy-wfl6s" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:55.668571  152550 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:57.675036  152550 pod_ready.go:103] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:09:55.911914  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912458  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:55.912484  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:55.912420  154153 retry.go:31] will retry after 578.675355ms: waiting for machine to come up
	I0826 12:09:56.493183  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:56.493668  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:56.493552  154153 retry.go:31] will retry after 793.710444ms: waiting for machine to come up
	I0826 12:09:57.289554  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290128  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:57.290160  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:57.290070  154153 retry.go:31] will retry after 1.099676217s: waiting for machine to come up
	I0826 12:09:58.391500  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392029  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:09:58.392060  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:09:58.391966  154153 retry.go:31] will retry after 1.753296062s: waiting for machine to come up
	I0826 12:10:00.148179  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148759  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:00.148795  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:00.148689  154153 retry.go:31] will retry after 1.591840738s: waiting for machine to come up
	I0826 12:09:58.462705  152982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059925579s)
	I0826 12:09:58.462738  152982 crio.go:469] duration metric: took 3.060066141s to extract the tarball
	I0826 12:09:58.462748  152982 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:09:58.504763  152982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:09:58.547876  152982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0826 12:09:58.547908  152982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:09:58.548002  152982 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.548020  152982 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.548047  152982 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.548058  152982 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.548025  152982 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.548107  152982 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.548041  152982 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0826 12:09:58.548004  152982 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550035  152982 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.550050  152982 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.550064  152982 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:58.550039  152982 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0826 12:09:58.550090  152982 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.550045  152982 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.550125  152982 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.550071  152982 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.785285  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.798866  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0826 12:09:58.801333  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.803488  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.845454  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.845683  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.851257  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.875512  152982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0826 12:09:58.875632  152982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.875702  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.899151  152982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0826 12:09:58.899204  152982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0826 12:09:58.899268  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.947547  152982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0826 12:09:58.947602  152982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.947657  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.960126  152982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0826 12:09:58.960178  152982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.960229  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.978450  152982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0826 12:09:58.978504  152982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:58.978571  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.981296  152982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0826 12:09:58.981335  152982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.981378  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990296  152982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0826 12:09:58.990341  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:58.990351  152982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:58.990398  152982 ssh_runner.go:195] Run: which crictl
	I0826 12:09:58.990481  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:58.990549  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:58.990624  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:58.993238  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:58.993297  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.117393  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.117394  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.137340  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.137381  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.137396  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.139282  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.140553  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.237314  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.242110  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0826 12:09:59.293209  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0826 12:09:59.293288  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0826 12:09:59.310442  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0826 12:09:59.316239  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0826 12:09:59.316345  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0826 12:09:59.382180  152982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0826 12:09:59.382851  152982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:09:59.389447  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0826 12:09:59.454424  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0826 12:09:59.484709  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0826 12:09:59.491496  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0826 12:09:59.491517  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0826 12:09:59.491555  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0826 12:09:59.495411  152982 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0826 12:09:59.614016  152982 cache_images.go:92] duration metric: took 1.066082637s to LoadCachedImages
	W0826 12:09:59.614118  152982 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0826 12:09:59.614133  152982 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.20.0 crio true true} ...
	I0826 12:09:59.614248  152982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-839656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:09:59.614345  152982 ssh_runner.go:195] Run: crio config
	I0826 12:09:59.661938  152982 cni.go:84] Creating CNI manager for ""
	I0826 12:09:59.661962  152982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:09:59.661975  152982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:09:59.661994  152982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-839656 NodeName:old-k8s-version-839656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0826 12:09:59.662131  152982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-839656"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:09:59.662212  152982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0826 12:09:59.672820  152982 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:09:59.672907  152982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:09:59.682949  152982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0826 12:09:59.701705  152982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:09:59.719839  152982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0826 12:09:59.737712  152982 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0826 12:09:59.741301  152982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:09:59.753857  152982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:09:59.877969  152982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:09:59.896278  152982 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656 for IP: 192.168.72.136
	I0826 12:09:59.896306  152982 certs.go:194] generating shared ca certs ...
	I0826 12:09:59.896337  152982 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:09:59.896522  152982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:09:59.896620  152982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:09:59.896640  152982 certs.go:256] generating profile certs ...
	I0826 12:09:59.896769  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.key
	I0826 12:09:59.896903  152982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key.bc731261
	I0826 12:09:59.896972  152982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key
	I0826 12:09:59.897126  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:09:59.897165  152982 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:09:59.897178  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:09:59.897216  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:09:59.897261  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:09:59.897303  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:09:59.897362  152982 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:09:59.898051  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:09:59.938407  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:09:59.983455  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:00.021803  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:00.058157  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0826 12:10:00.095920  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:00.133185  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:00.167537  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:00.193940  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:00.220558  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:00.245567  152982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:00.274758  152982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:00.296741  152982 ssh_runner.go:195] Run: openssl version
	I0826 12:10:00.305185  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:00.321395  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326339  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.326422  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:00.332789  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:00.343971  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:00.355979  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360900  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.360985  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:00.367085  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:00.379942  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:00.391907  152982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396769  152982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.396845  152982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:00.403009  152982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:00.416262  152982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:00.422105  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:00.428526  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:00.435241  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:00.441902  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:00.448502  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:00.455012  152982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0826 12:10:00.461390  152982 kubeadm.go:392] StartCluster: {Name:old-k8s-version-839656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-839656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:00.461533  152982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:00.461596  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.503939  152982 cri.go:89] found id: ""
	I0826 12:10:00.504026  152982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:00.515410  152982 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:00.515434  152982 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:00.515483  152982 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:00.527240  152982 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:00.528558  152982 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-839656" does not appear in /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:10:00.529540  152982 kubeconfig.go:62] /home/jenkins/minikube-integration/19501-99403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-839656" cluster setting kubeconfig missing "old-k8s-version-839656" context setting]
	I0826 12:10:00.530977  152982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:00.618477  152982 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:00.630233  152982 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0826 12:10:00.630283  152982 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:00.630300  152982 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:00.630367  152982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:00.667438  152982 cri.go:89] found id: ""
	I0826 12:10:00.667535  152982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:00.685319  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:00.695968  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:00.696003  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:00.696087  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:00.706519  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:00.706583  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:00.716807  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:00.726555  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:00.726637  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:00.737356  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.747702  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:00.747773  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:00.758771  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:00.769257  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:00.769345  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:00.780102  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:00.791976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:00.922432  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:09:58.196998  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:09:58.197024  152550 pod_ready.go:82] duration metric: took 2.528445128s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:09:58.197035  152550 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:00.486854  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:02.704500  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:01.741774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742399  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:01.742443  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:01.742299  154153 retry.go:31] will retry after 2.754846919s: waiting for machine to come up
	I0826 12:10:04.499575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499918  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:04.499950  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:04.499866  154153 retry.go:31] will retry after 2.260097113s: waiting for machine to come up
	I0826 12:10:02.146027  152982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223548629s)
	I0826 12:10:02.146087  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.407469  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.511616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:02.629123  152982 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:02.629250  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.129448  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:03.629685  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.129759  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:04.629807  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.129526  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.629782  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.129949  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:06.630031  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:05.203846  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:07.703046  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:06.761311  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | unable to find current IP address of domain default-k8s-diff-port-697869 in network mk-default-k8s-diff-port-697869
	I0826 12:10:06.761805  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | I0826 12:10:06.761731  154153 retry.go:31] will retry after 3.424580644s: waiting for machine to come up
	I0826 12:10:10.188178  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188746  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has current primary IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.188774  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Found IP for machine: 192.168.61.11
	I0826 12:10:10.188789  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserving static IP address...
	I0826 12:10:10.189233  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.189270  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | skip adding static IP to network mk-default-k8s-diff-port-697869 - found existing host DHCP lease matching {name: "default-k8s-diff-port-697869", mac: "52:54:00:87:9b:a7", ip: "192.168.61.11"}
	I0826 12:10:10.189292  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Reserved static IP address: 192.168.61.11
	I0826 12:10:10.189312  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Waiting for SSH to be available...
	I0826 12:10:10.189327  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Getting to WaitForSSH function...
	I0826 12:10:10.191775  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192162  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.192192  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.192272  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH client type: external
	I0826 12:10:10.192300  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa (-rw-------)
	I0826 12:10:10.192332  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:10.192351  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | About to run SSH command:
	I0826 12:10:10.192364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | exit 0
	I0826 12:10:10.315078  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | SSH cmd err, output: <nil>: 
	I0826 12:10:10.315506  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetConfigRaw
	I0826 12:10:10.316191  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.318850  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319207  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.319235  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.319491  153366 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/config.json ...
	I0826 12:10:10.319715  153366 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:10.319736  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:10.320045  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.322352  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322660  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.322682  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.322852  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.323067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323216  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.323371  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.323524  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.323732  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.323745  153366 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:10.427284  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:10.427314  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427630  153366 buildroot.go:166] provisioning hostname "default-k8s-diff-port-697869"
	I0826 12:10:10.427661  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.427836  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.430485  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.430865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.430894  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.431065  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.431240  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431388  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.431507  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.431631  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.431804  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.431818  153366 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697869 && echo "default-k8s-diff-port-697869" | sudo tee /etc/hostname
	I0826 12:10:10.544414  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697869
	
	I0826 12:10:10.544455  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.547901  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548333  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.548375  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.548612  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.548835  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549074  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.549250  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.549458  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.549632  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.549648  153366 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:10.659809  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:10.659858  153366 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:10.659937  153366 buildroot.go:174] setting up certificates
	I0826 12:10:10.659957  153366 provision.go:84] configureAuth start
	I0826 12:10:10.659978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetMachineName
	I0826 12:10:10.660304  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:10.663231  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.663628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.663798  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.666261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666603  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.666630  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.666827  153366 provision.go:143] copyHostCerts
	I0826 12:10:10.666912  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:10.666937  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:10.667005  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:10.667125  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:10.667137  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:10.667164  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:10.667239  153366 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:10.667249  153366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:10.667273  153366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:10.667344  153366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697869 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-697869 localhost minikube]
	I0826 12:10:11.491531  152463 start.go:364] duration metric: took 54.190046907s to acquireMachinesLock for "no-preload-956479"
	I0826 12:10:11.491592  152463 start.go:96] Skipping create...Using existing machine configuration
	I0826 12:10:11.491601  152463 fix.go:54] fixHost starting: 
	I0826 12:10:11.492032  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:10:11.492066  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:10:11.509260  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0826 12:10:11.509870  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:10:11.510401  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:10:11.510433  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:10:11.510772  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:10:11.510983  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:11.511151  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:10:11.513024  152463 fix.go:112] recreateIfNeeded on no-preload-956479: state=Stopped err=<nil>
	I0826 12:10:11.513048  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	W0826 12:10:11.513218  152463 fix.go:138] unexpected machine state, will restart: <nil>
	I0826 12:10:11.515241  152463 out.go:177] * Restarting existing kvm2 VM for "no-preload-956479" ...
	I0826 12:10:07.129729  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:07.629445  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.129308  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:08.629701  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.130082  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.629958  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.129963  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:10.629747  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.130061  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:11.630060  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:09.703400  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:11.703487  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:10.808804  153366 provision.go:177] copyRemoteCerts
	I0826 12:10:10.808865  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:10.808893  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.811758  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812215  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.812251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.812451  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.812664  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.812817  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.813020  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:10.905741  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:10.931863  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0826 12:10:10.958232  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:10.983737  153366 provision.go:87] duration metric: took 323.761817ms to configureAuth
	I0826 12:10:10.983774  153366 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:10.983992  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:10.984092  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:10.986976  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987357  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:10.987386  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:10.987628  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:10.987842  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.987978  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:10.988105  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:10.988276  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:10.988443  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:10.988459  153366 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:11.257812  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:11.257846  153366 machine.go:96] duration metric: took 938.116965ms to provisionDockerMachine
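The provisioning step above writes a one-line environment file under /etc/sysconfig and then restarts CRI-O so the extra --insecure-registry option takes effect. A rough sketch of that file write; that crio.service sources /etc/sysconfig/crio.minikube as an EnvironmentFile is an assumption about the minikube guest image, not something shown in the log:

package main

import (
	"log"
	"os"
)

func main() {
	// Same content the log shows being piped through `sudo tee`.
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	// On the real VM this write is followed by `systemctl restart crio`
	// so the new options are picked up.
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		log.Fatal(err)
	}
}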
	I0826 12:10:11.257861  153366 start.go:293] postStartSetup for "default-k8s-diff-port-697869" (driver="kvm2")
	I0826 12:10:11.257872  153366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:11.257889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.258214  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:11.258246  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.261404  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261680  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.261702  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.261886  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.262067  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.262214  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.262386  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.345667  153366 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:11.349967  153366 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:11.350004  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:11.350084  153366 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:11.350186  153366 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:11.350308  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:11.361671  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:11.386178  153366 start.go:296] duration metric: took 128.298803ms for postStartSetup
	I0826 12:10:11.386233  153366 fix.go:56] duration metric: took 19.198180603s for fixHost
	I0826 12:10:11.386258  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.389263  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389579  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.389606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.389838  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.390034  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390172  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.390287  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.390479  153366 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:11.390666  153366 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0826 12:10:11.390678  153366 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:11.491363  153366 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674211.462689704
	
	I0826 12:10:11.491389  153366 fix.go:216] guest clock: 1724674211.462689704
	I0826 12:10:11.491401  153366 fix.go:229] Guest: 2024-08-26 12:10:11.462689704 +0000 UTC Remote: 2024-08-26 12:10:11.386238136 +0000 UTC m=+195.618286719 (delta=76.451568ms)
	I0826 12:10:11.491428  153366 fix.go:200] guest clock delta is within tolerance: 76.451568ms
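fix.go compares the guest's `date +%s.%N` output against the local clock and accepts the machine when the delta stays inside a tolerance (about 76ms here). A self-contained sketch of that comparison; only the parsing and the delta arithmetic mirror the log, and the 1s tolerance is an assumption:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1724674211.462689704")
// into a time.Time; %N always prints nine digits, i.e. nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724674211.462689704") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (assumed tolerance: 1s)\n", delta)
}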
	I0826 12:10:11.491433  153366 start.go:83] releasing machines lock for "default-k8s-diff-port-697869", held for 19.303413047s
	I0826 12:10:11.491459  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.491760  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:11.494596  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495094  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.495124  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.495330  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.495889  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496208  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:10:11.496333  153366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:11.496390  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.496433  153366 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:11.496456  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:10:11.499087  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499251  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499442  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499469  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499705  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:11.499728  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:11.499751  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.499964  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500007  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:10:11.500134  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:10:11.500164  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500359  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:10:11.500349  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.500509  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:10:11.612518  153366 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:11.618693  153366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:11.766025  153366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:11.772405  153366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:11.772476  153366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:11.790401  153366 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:11.790433  153366 start.go:495] detecting cgroup driver to use...
	I0826 12:10:11.790505  153366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:11.806946  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:11.822137  153366 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:11.822199  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:11.836496  153366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:11.851090  153366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:11.963366  153366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:12.113326  153366 docker.go:233] disabling docker service ...
	I0826 12:10:12.113402  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:12.131489  153366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:12.148801  153366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:12.293074  153366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:12.420202  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:12.435061  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:12.455192  153366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:12.455268  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.467004  153366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:12.467079  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.477903  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.488979  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.500322  153366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:12.513490  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.525746  153366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:12.544944  153366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
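The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and put conmon into the "pod" cgroup. A rough Go equivalent of those substitutions, operating on a string instead of the file, for illustration only:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands in the
// log to a 02-crio.conf snippet.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.runtime]\npause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}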
	I0826 12:10:12.556159  153366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:12.566333  153366 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:12.566420  153366 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:12.584702  153366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:12.596221  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:12.740368  153366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0826 12:10:12.882412  153366 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:12.882501  153366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:12.888373  153366 start.go:563] Will wait 60s for crictl version
	I0826 12:10:12.888446  153366 ssh_runner.go:195] Run: which crictl
	I0826 12:10:12.892415  153366 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:12.930486  153366 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0826 12:10:12.930577  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.959322  153366 ssh_runner.go:195] Run: crio --version
	I0826 12:10:12.997340  153366 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
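After restarting CRI-O, the tool waits up to 60s for /var/run/crio/crio.sock to exist and then confirms the runtime with `crictl version` (cri-o 1.29.1 above). A small sketch of that readiness check; the poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls for a unix socket path to exist, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}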
	I0826 12:10:11.516801  152463 main.go:141] libmachine: (no-preload-956479) Calling .Start
	I0826 12:10:11.517026  152463 main.go:141] libmachine: (no-preload-956479) Ensuring networks are active...
	I0826 12:10:11.517932  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network default is active
	I0826 12:10:11.518378  152463 main.go:141] libmachine: (no-preload-956479) Ensuring network mk-no-preload-956479 is active
	I0826 12:10:11.518950  152463 main.go:141] libmachine: (no-preload-956479) Getting domain xml...
	I0826 12:10:11.519889  152463 main.go:141] libmachine: (no-preload-956479) Creating domain...
	I0826 12:10:12.859267  152463 main.go:141] libmachine: (no-preload-956479) Waiting to get IP...
	I0826 12:10:12.860407  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:12.860889  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:12.860933  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:12.860840  154342 retry.go:31] will retry after 295.429691ms: waiting for machine to come up
	I0826 12:10:13.158650  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.159259  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.159290  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.159207  154342 retry.go:31] will retry after 385.646499ms: waiting for machine to come up
	I0826 12:10:13.547162  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.547722  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.547754  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.547631  154342 retry.go:31] will retry after 390.965905ms: waiting for machine to come up
	I0826 12:10:13.940240  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:13.940777  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:13.940820  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:13.940714  154342 retry.go:31] will retry after 457.995779ms: waiting for machine to come up
	I0826 12:10:14.400465  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:14.400981  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:14.401016  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:14.400917  154342 retry.go:31] will retry after 697.078299ms: waiting for machine to come up
	I0826 12:10:12.998786  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetIP
	I0826 12:10:13.001919  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002340  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:10:13.002376  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:10:13.002627  153366 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:13.007888  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:13.023470  153366 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:13.023599  153366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:13.023666  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:13.060321  153366 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:13.060405  153366 ssh_runner.go:195] Run: which lz4
	I0826 12:10:13.064638  153366 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0826 12:10:13.069089  153366 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0826 12:10:13.069126  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0826 12:10:14.437617  153366 crio.go:462] duration metric: took 1.373030307s to copy over tarball
	I0826 12:10:14.437710  153366 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0826 12:10:12.129652  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:12.630076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.129342  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.630081  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.130129  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:14.629381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.129909  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:15.630114  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.129784  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:16.629463  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:13.704867  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:16.204819  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:15.099404  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:15.100002  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:15.100035  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:15.099956  154342 retry.go:31] will retry after 947.348263ms: waiting for machine to come up
	I0826 12:10:16.048628  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:16.049166  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:16.049185  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:16.049113  154342 retry.go:31] will retry after 1.169467339s: waiting for machine to come up
	I0826 12:10:17.219998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:17.220564  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:17.220589  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:17.220541  154342 retry.go:31] will retry after 945.873541ms: waiting for machine to come up
	I0826 12:10:18.167823  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:18.168351  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:18.168377  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:18.168272  154342 retry.go:31] will retry after 1.495556294s: waiting for machine to come up
	I0826 12:10:19.666032  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:19.666629  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:19.666656  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:19.666551  154342 retry.go:31] will retry after 1.710448725s: waiting for machine to come up
	I0826 12:10:16.739676  153366 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301910814s)
	I0826 12:10:16.739718  153366 crio.go:469] duration metric: took 2.302064986s to extract the tarball
	I0826 12:10:16.739729  153366 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0826 12:10:16.777127  153366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:16.820340  153366 crio.go:514] all images are preloaded for cri-o runtime.
	I0826 12:10:16.820367  153366 cache_images.go:84] Images are preloaded, skipping loading
	I0826 12:10:16.820376  153366 kubeadm.go:934] updating node { 192.168.61.11 8444 v1.31.0 crio true true} ...
	I0826 12:10:16.820500  153366 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0826 12:10:16.820619  153366 ssh_runner.go:195] Run: crio config
	I0826 12:10:16.868670  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:16.868694  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:16.868708  153366 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:16.868738  153366 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697869 NodeName:default-k8s-diff-port-697869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:16.868915  153366 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-697869"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:16.869010  153366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:16.883092  153366 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:16.883230  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:16.893951  153366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0826 12:10:16.911836  153366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:16.928582  153366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0826 12:10:16.945593  153366 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:16.949432  153366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
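The bash one-liner above makes the /etc/hosts entry for control-plane.minikube.internal idempotent: any old line for that name is filtered out and the current IP is appended. A rough Go equivalent; the helper name and the temp-file path are illustrative:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host, appends ip<TAB>host,
// and writes the result back via a temp file, like the shell pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.11", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}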
	I0826 12:10:16.961659  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:17.085246  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:17.103244  153366 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869 for IP: 192.168.61.11
	I0826 12:10:17.103271  153366 certs.go:194] generating shared ca certs ...
	I0826 12:10:17.103302  153366 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:17.103510  153366 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:17.103575  153366 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:17.103585  153366 certs.go:256] generating profile certs ...
	I0826 12:10:17.103700  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.key
	I0826 12:10:17.103787  153366 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key.bfd30dfa
	I0826 12:10:17.103839  153366 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key
	I0826 12:10:17.103989  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:17.104033  153366 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:17.104045  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:17.104088  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:17.104138  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:17.104169  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:17.104226  153366 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:17.105131  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:17.133445  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:17.170369  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:17.203828  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:17.239736  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0826 12:10:17.270804  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0826 12:10:17.311143  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:17.337241  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0826 12:10:17.361255  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:17.389089  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:17.415203  153366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:17.440069  153366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:17.457711  153366 ssh_runner.go:195] Run: openssl version
	I0826 12:10:17.463825  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:17.475007  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479590  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.479674  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:17.485682  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
	I0826 12:10:17.496820  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:17.507770  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512284  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.512360  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:17.518185  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:17.530028  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:17.541213  153366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546412  153366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.546492  153366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:17.552969  153366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:17.565000  153366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:17.570123  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:17.576431  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:17.582447  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:17.588686  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:17.595338  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:17.601487  153366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
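The series of `openssl x509 -noout -in ... -checkend 86400` runs above asks, for each control-plane certificate, whether it expires within the next 24 hours; the log only shows the checks, so treating a failure as "regenerate before continuing" is an inference. An equivalent check written with the Go standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the certificate at path expires within the given
// window, the same question `openssl x509 -checkend 86400` answers above.
func checkEnd(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiringSoon, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if expiringSoon {
		fmt.Println("certificate expires within 24h")
	}
}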
	I0826 12:10:17.607923  153366 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-697869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-697869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:17.608035  153366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:17.608125  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.647040  153366 cri.go:89] found id: ""
	I0826 12:10:17.647140  153366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:17.657597  153366 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:17.657623  153366 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:17.657696  153366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:17.667949  153366 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:17.669056  153366 kubeconfig.go:125] found "default-k8s-diff-port-697869" server: "https://192.168.61.11:8444"
	I0826 12:10:17.671281  153366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:17.680798  153366 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I0826 12:10:17.680847  153366 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:17.680862  153366 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:17.680921  153366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:17.718772  153366 cri.go:89] found id: ""
	I0826 12:10:17.718890  153366 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:17.737115  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:17.747272  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:17.747300  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:17.747365  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:10:17.757172  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:17.757253  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:17.767325  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:10:17.779947  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:17.780022  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:17.789867  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.799532  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:17.799614  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:17.812714  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:10:17.825162  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:17.825246  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:17.838058  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:17.855348  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:17.976993  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:18.821196  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.025876  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:19.104571  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
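Since the existing kubeconfig files were missing, the restart path re-runs kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, using the pinned v1.31.0 binaries. A compact sketch of that sequence; error handling and output capture are simplified compared to whatever the tool itself does:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same phase order as the log, run against the pinned v1.31.0 binaries.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	binDir := "/var/lib/minikube/binaries/v1.31.0"
	for _, phase := range phases {
		args := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running kubeadm init phase", phase)
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %v failed: %v", phase, err)
		}
	}
}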
	I0826 12:10:19.198607  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:19.198729  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.698978  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.198987  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.246044  153366 api_server.go:72] duration metric: took 1.047451922s to wait for apiserver process to appear ...
	I0826 12:10:20.246077  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:20.246102  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:20.246682  153366 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0826 12:10:20.747149  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
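Once the apiserver process exists, the wait switches to polling https://192.168.61.11:8444/healthz, treating "connection refused" (and, a few lines below, 403 and 500 responses) as "not ready yet". A self-contained sketch of such a poll loop; the real client authenticates with a client certificate, whereas this sketch simply skips TLS verification, and the timeout is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps checking the apiserver /healthz endpoint until it
// returns 200 or the deadline passes; connection errors and non-200
// status codes both count as "retry later".
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.11:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}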
	I0826 12:10:17.129856  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:17.629845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.129411  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.629780  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.129980  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:19.629521  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.129719  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:20.630349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.130078  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:21.629658  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:18.704382  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:20.705290  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:22.705625  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
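metrics-server in this interleaved run has been reporting Ready=False for several minutes. A quick way to inspect why, assuming kubectl is pointed at that profile and the addon's usual k8s-app=metrics-server label:

# Show the not-Ready metrics-server pod and its most recent events.
kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
kubectl -n kube-system describe pod metrics-server-6867b74b74-cw5t8 | tail -n 20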
	I0826 12:10:21.379594  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:21.380141  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:21.380174  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:21.380054  154342 retry.go:31] will retry after 2.588125482s: waiting for machine to come up
	I0826 12:10:23.969901  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:23.970463  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:23.970492  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:23.970429  154342 retry.go:31] will retry after 2.959609618s: waiting for machine to come up
	I0826 12:10:22.736733  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.736773  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.736792  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.767927  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:22.767978  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:22.767998  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:22.815605  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:22.815647  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.247226  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.265036  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.265070  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:23.746536  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:23.761050  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:23.761087  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.246584  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.256796  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.256832  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:24.746370  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:24.751618  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:24.751659  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.246161  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.250242  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.250271  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:25.746903  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:25.751494  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:25.751522  153366 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:26.246579  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:10:26.251290  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:10:26.257484  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:26.257519  153366 api_server.go:131] duration metric: took 6.01143401s to wait for apiserver health ...
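The health wait above moves from "connection refused", to 403 for the anonymous probe, to 500 while post-start hooks (the [-] entries) finish, and finally to 200. A hand-run equivalent of that polling loop, assuming the endpoint from the log and skipping TLS verification because the probe carries no client certificate:

# Poll the apiserver healthz endpoint until it returns the literal body "ok".
# -s silences progress, -k skips TLS verification, --max-time bounds a hung connection.
until body=$(curl -sk --max-time 2 https://192.168.61.11:8444/healthz) && [ "$body" = "ok" ]; do
  echo "not healthy yet: ${body:-connection refused}"
  sleep 0.5
done
echo "apiserver healthz: ok"

The 403 and 500 bodies never equal "ok", so the loop keeps waiting through the intermediate states just as the log does.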
	I0826 12:10:26.257529  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:10:26.257536  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:26.259498  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:22.130431  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:22.630197  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.129672  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:23.630044  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.129562  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:24.629554  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.129334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.630351  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.130136  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:26.629461  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:25.203975  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:27.704731  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:26.932057  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:26.932632  152463 main.go:141] libmachine: (no-preload-956479) DBG | unable to find current IP address of domain no-preload-956479 in network mk-no-preload-956479
	I0826 12:10:26.932665  152463 main.go:141] libmachine: (no-preload-956479) DBG | I0826 12:10:26.932547  154342 retry.go:31] will retry after 3.538498107s: waiting for machine to come up
	I0826 12:10:26.260852  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:26.271312  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
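The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely as an illustration of the kind of file this step writes, a representative bridge + host-local configuration (the pod CIDR and cniVersion below are assumptions, not values taken from this run):

# Illustrative only: a minimal bridge CNI conflist of the kind referenced above.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF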
	I0826 12:10:26.290104  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:26.299800  153366 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:26.299843  153366 system_pods.go:61] "coredns-6f6b679f8f-d5f9l" [7761358c-70cb-40e1-98c2-322335e33053] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:26.299852  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [877bd1a3-67e5-4208-96f7-242f6a6edd76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:26.299858  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [e2d33714-bff0-480b-9619-ed28f0fbbbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:26.299868  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [f858c23a-d87e-4f1e-bffa-0bdd8ded996f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:26.299872  153366 system_pods.go:61] "kube-proxy-lvsx9" [12112756-81ed-415f-9033-cb9effdd20ee] Running
	I0826 12:10:26.299880  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [d8991013-f5ee-4df3-b48a-d6546417999a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:26.299885  153366 system_pods.go:61] "metrics-server-6867b74b74-spxx8" [1d5d9b1e-05f3-4b59-98a8-8d8f127be3c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:26.299889  153366 system_pods.go:61] "storage-provisioner" [ac2ac441-92f0-467a-a0da-fe4b8e4da50c] Running
	I0826 12:10:26.299896  153366 system_pods.go:74] duration metric: took 9.758032ms to wait for pod list to return data ...
	I0826 12:10:26.299903  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:26.303810  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:26.303848  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:26.303865  153366 node_conditions.go:105] duration metric: took 3.956287ms to run NodePressure ...
	I0826 12:10:26.303888  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:26.568053  153366 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573755  153366 kubeadm.go:739] kubelet initialised
	I0826 12:10:26.573793  153366 kubeadm.go:740] duration metric: took 5.692563ms waiting for restarted kubelet to initialise ...
	I0826 12:10:26.573810  153366 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:26.580178  153366 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:28.585940  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.587027  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"False"
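The pod_ready loop above is equivalent to waiting on the pod's Ready condition with kubectl. Assuming minikube's usual context naming (the profile name taken from the pod names above) and the standard CoreDNS label:

# Wait up to 4 minutes for CoreDNS to become Ready, mirroring the pod_ready polling above.
kubectl --context default-k8s-diff-port-697869 -n kube-system \
  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m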
	I0826 12:10:27.129634  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:27.629356  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.130029  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:28.629993  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.130030  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:29.629424  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.129476  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.630209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.129435  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:31.630170  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:30.203373  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:32.204503  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:30.474603  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475145  152463 main.go:141] libmachine: (no-preload-956479) Found IP for machine: 192.168.50.213
	I0826 12:10:30.475172  152463 main.go:141] libmachine: (no-preload-956479) Reserving static IP address...
	I0826 12:10:30.475184  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has current primary IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.475655  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.475688  152463 main.go:141] libmachine: (no-preload-956479) DBG | skip adding static IP to network mk-no-preload-956479 - found existing host DHCP lease matching {name: "no-preload-956479", mac: "52:54:00:dd:57:47", ip: "192.168.50.213"}
	I0826 12:10:30.475705  152463 main.go:141] libmachine: (no-preload-956479) Reserved static IP address: 192.168.50.213
	I0826 12:10:30.475724  152463 main.go:141] libmachine: (no-preload-956479) Waiting for SSH to be available...
	I0826 12:10:30.475749  152463 main.go:141] libmachine: (no-preload-956479) DBG | Getting to WaitForSSH function...
	I0826 12:10:30.477762  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478222  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.478256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.478323  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH client type: external
	I0826 12:10:30.478352  152463 main.go:141] libmachine: (no-preload-956479) DBG | Using SSH private key: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa (-rw-------)
	I0826 12:10:30.478400  152463 main.go:141] libmachine: (no-preload-956479) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0826 12:10:30.478423  152463 main.go:141] libmachine: (no-preload-956479) DBG | About to run SSH command:
	I0826 12:10:30.478431  152463 main.go:141] libmachine: (no-preload-956479) DBG | exit 0
	I0826 12:10:30.607143  152463 main.go:141] libmachine: (no-preload-956479) DBG | SSH cmd err, output: <nil>: 
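The retries earlier in this run are libmachine waiting for the KVM guest to obtain its DHCP lease and answer SSH; the exit-0 probe above is where that wait ends. Both checks can be reproduced from the host with the names recorded in the log (virsh requires libvirt access):

# List DHCP leases on the libvirt network used by this profile.
sudo virsh net-dhcp-leases mk-no-preload-956479

# Once 192.168.50.213 appears, confirm SSH the same way the log does (a bare `exit 0`).
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  -i /home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa \
  docker@192.168.50.213 'exit 0' && echo "ssh is up"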
	I0826 12:10:30.607526  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetConfigRaw
	I0826 12:10:30.608312  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.611028  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611425  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.611461  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.611664  152463 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/config.json ...
	I0826 12:10:30.611888  152463 machine.go:93] provisionDockerMachine start ...
	I0826 12:10:30.611920  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:30.612166  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.614651  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615221  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.615253  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.615430  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.615623  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615802  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.615987  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.616182  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.616357  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.616367  152463 main.go:141] libmachine: About to run SSH command:
	hostname
	I0826 12:10:30.719178  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0826 12:10:30.719220  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719544  152463 buildroot.go:166] provisioning hostname "no-preload-956479"
	I0826 12:10:30.719577  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.719829  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.722665  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723083  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.723112  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.723299  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.723479  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.723805  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.723965  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.724136  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.724154  152463 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-956479 && echo "no-preload-956479" | sudo tee /etc/hostname
	I0826 12:10:30.844510  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-956479
	
	I0826 12:10:30.844551  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.848147  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848601  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.848636  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.848846  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:30.849053  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849234  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:30.849371  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:30.849537  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:30.849711  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:30.849726  152463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956479/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0826 12:10:30.963743  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0826 12:10:30.963781  152463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19501-99403/.minikube CaCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19501-99403/.minikube}
	I0826 12:10:30.963831  152463 buildroot.go:174] setting up certificates
	I0826 12:10:30.963844  152463 provision.go:84] configureAuth start
	I0826 12:10:30.963858  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetMachineName
	I0826 12:10:30.964223  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:30.967426  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.967922  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.967947  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.968210  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:30.970910  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971231  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:30.971268  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:30.971381  152463 provision.go:143] copyHostCerts
	I0826 12:10:30.971439  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem, removing ...
	I0826 12:10:30.971462  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem
	I0826 12:10:30.971515  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/ca.pem (1078 bytes)
	I0826 12:10:30.971610  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem, removing ...
	I0826 12:10:30.971620  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem
	I0826 12:10:30.971641  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/cert.pem (1123 bytes)
	I0826 12:10:30.971695  152463 exec_runner.go:144] found /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem, removing ...
	I0826 12:10:30.971708  152463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem
	I0826 12:10:30.971726  152463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19501-99403/.minikube/key.pem (1679 bytes)
	I0826 12:10:30.971773  152463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem org=jenkins.no-preload-956479 san=[127.0.0.1 192.168.50.213 localhost minikube no-preload-956479]
	I0826 12:10:31.209813  152463 provision.go:177] copyRemoteCerts
	I0826 12:10:31.209904  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0826 12:10:31.209939  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.213380  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.213880  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.213921  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.214161  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.214387  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.214543  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.214669  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.304972  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0826 12:10:31.332069  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0826 12:10:31.359526  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0826 12:10:31.387988  152463 provision.go:87] duration metric: took 424.128041ms to configureAuth
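With the CA, server certificate and key now under /etc/docker on the guest, their consistency can be checked in place. A small sketch, assuming openssl is present in the guest image and the usual RSA material:

# Verify the copied server certificate chains to the copied CA.
sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
# The certificate and private key must carry the same modulus.
sudo openssl x509 -noout -modulus -in /etc/docker/server.pem | sha256sum
sudo openssl rsa  -noout -modulus -in /etc/docker/server-key.pem | sha256sum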
	I0826 12:10:31.388025  152463 buildroot.go:189] setting minikube options for container-runtime
	I0826 12:10:31.388248  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:10:31.388342  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.392126  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392495  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.392527  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.392770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.393069  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393276  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.393443  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.393636  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.393812  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.393830  152463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0826 12:10:31.673101  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0826 12:10:31.673134  152463 machine.go:96] duration metric: took 1.061231135s to provisionDockerMachine
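The sysconfig drop-in written above only matters if crio restarted cleanly and re-read it. A quick check on the guest, assuming systemd and journald as in the Buildroot image:

# Confirm the drop-in exists and that crio is active after the restart above.
cat /etc/sysconfig/crio.minikube
sudo systemctl is-active crio
sudo journalctl -u crio --since "2 minutes ago" --no-pager | tail -n 5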
	I0826 12:10:31.673147  152463 start.go:293] postStartSetup for "no-preload-956479" (driver="kvm2")
	I0826 12:10:31.673157  152463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0826 12:10:31.673173  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.673523  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0826 12:10:31.673556  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.676692  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677097  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.677142  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.677349  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.677558  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.677702  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.677822  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.757940  152463 ssh_runner.go:195] Run: cat /etc/os-release
	I0826 12:10:31.762636  152463 info.go:137] Remote host: Buildroot 2023.02.9
	I0826 12:10:31.762668  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/addons for local assets ...
	I0826 12:10:31.762755  152463 filesync.go:126] Scanning /home/jenkins/minikube-integration/19501-99403/.minikube/files for local assets ...
	I0826 12:10:31.762887  152463 filesync.go:149] local asset: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem -> 1065982.pem in /etc/ssl/certs
	I0826 12:10:31.763005  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0826 12:10:31.773596  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:31.805712  152463 start.go:296] duration metric: took 132.547938ms for postStartSetup
	I0826 12:10:31.805772  152463 fix.go:56] duration metric: took 20.314170869s for fixHost
	I0826 12:10:31.805799  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.809143  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809503  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.809539  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.809770  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.810034  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.810552  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.810714  152463 main.go:141] libmachine: Using SSH client type: native
	I0826 12:10:31.810950  152463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0826 12:10:31.810964  152463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0826 12:10:31.919562  152463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724674231.878777554
	
	I0826 12:10:31.919593  152463 fix.go:216] guest clock: 1724674231.878777554
	I0826 12:10:31.919605  152463 fix.go:229] Guest: 2024-08-26 12:10:31.878777554 +0000 UTC Remote: 2024-08-26 12:10:31.805776925 +0000 UTC m=+357.093278934 (delta=73.000629ms)
	I0826 12:10:31.919635  152463 fix.go:200] guest clock delta is within tolerance: 73.000629ms
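For reference, the delta reported above is simply guest minus host at the same instant: 1724674231.878777554 − 1724674231.805776925 = 0.073000629 s, i.e. the 73.000629ms shown, which is inside minikube's tolerance, so no clock correction is attempted before the machines lock is released.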
	I0826 12:10:31.919653  152463 start.go:83] releasing machines lock for "no-preload-956479", held for 20.428086051s
	I0826 12:10:31.919683  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.919994  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:31.922926  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923273  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.923305  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.923492  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924019  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924217  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:10:31.924314  152463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0826 12:10:31.924361  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.924462  152463 ssh_runner.go:195] Run: cat /version.json
	I0826 12:10:31.924485  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:10:31.927256  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927510  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927697  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927724  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.927869  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.927977  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:31.927998  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:31.928076  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928245  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:10:31.928265  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928507  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:31.928547  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:10:31.928695  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:10:31.928816  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:10:32.013240  152463 ssh_runner.go:195] Run: systemctl --version
	I0826 12:10:32.047898  152463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0826 12:10:32.200554  152463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0826 12:10:32.207077  152463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0826 12:10:32.207149  152463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0826 12:10:32.223842  152463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0826 12:10:32.223869  152463 start.go:495] detecting cgroup driver to use...
	I0826 12:10:32.223931  152463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0826 12:10:32.241232  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0826 12:10:32.256522  152463 docker.go:217] disabling cri-docker service (if available) ...
	I0826 12:10:32.256594  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0826 12:10:32.271203  152463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0826 12:10:32.286062  152463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0826 12:10:32.422959  152463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0826 12:10:32.596450  152463 docker.go:233] disabling docker service ...
	I0826 12:10:32.596518  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0826 12:10:32.610684  152463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0826 12:10:32.624456  152463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0826 12:10:32.754300  152463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0826 12:10:32.880447  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0826 12:10:32.895761  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0826 12:10:32.915507  152463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0826 12:10:32.915579  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.926244  152463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0826 12:10:32.926323  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.936322  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.947292  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.958349  152463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0826 12:10:32.969332  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:32.981643  152463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.003757  152463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0826 12:10:33.014520  152463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0826 12:10:33.024134  152463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0826 12:10:33.024220  152463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0826 12:10:33.036667  152463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0826 12:10:33.046675  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:33.166681  152463 ssh_runner.go:195] Run: sudo systemctl restart crio
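Taken together, the sed edits between 12:10:32.915 and 12:10:33.003 leave the keys below in /etc/crio/crio.conf.d/02-crio.conf, which the restart above then picks up. This is only a sketch of the settings touched by those commands; the drop-in shipped in the ISO already contains the TOML tables and further stock options:

	# /etc/crio/crio.conf.d/02-crio.conf (excerpt, reconstructed from the sed commands above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The br_netfilter modprobe and the `echo 1 > /proc/sys/net/ipv4/ip_forward` that precede the restart are kernel-side prerequisites for the bridge pod network recommended later in this log, not CRI-O settings.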
	I0826 12:10:33.314047  152463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0826 12:10:33.314136  152463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0826 12:10:33.319922  152463 start.go:563] Will wait 60s for crictl version
	I0826 12:10:33.320002  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.323747  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0826 12:10:33.363172  152463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
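Every `crictl` invocation in this restart talks to CRI-O because /etc/crictl.yaml was written at 12:10:32.895761 to point at its socket; that file now contains a single line, and the version probe above is the quickest way to confirm the wiring:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	sudo /usr/bin/crictl version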
	I0826 12:10:33.363268  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.391607  152463 ssh_runner.go:195] Run: crio --version
	I0826 12:10:33.422180  152463 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0826 12:10:33.423515  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetIP
	I0826 12:10:33.426749  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427279  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:10:33.427316  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:10:33.427559  152463 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0826 12:10:33.431826  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
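The one-liner above is minikube's usual atomic hosts-file edit: it filters out any existing host.minikube.internal entry, appends the fresh mapping, writes the result to a temp file, and only then copies it over /etc/hosts with sudo, so a failed write cannot truncate the file. The same command, just reformatted with comments ($$ is the shell's PID, used to keep the temp name unique):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any stale entry
	  echo "192.168.50.1	host.minikube.internal"        # append the current gateway IP
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts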
	I0826 12:10:33.443984  152463 kubeadm.go:883] updating cluster {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0826 12:10:33.444119  152463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 12:10:33.444160  152463 ssh_runner.go:195] Run: sudo crictl images --output json
	I0826 12:10:33.478886  152463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0826 12:10:33.478919  152463 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0826 12:10:33.478977  152463 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.478997  152463 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.479029  152463 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.479079  152463 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0826 12:10:33.479002  152463 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.479095  152463 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.479153  152463 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.479157  152463 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480618  152463 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.480616  152463 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.480650  152463 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.480654  152463 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.480623  152463 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.480628  152463 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:33.480629  152463 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.480763  152463 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0826 12:10:33.713473  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0826 12:10:33.725267  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.737490  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.787737  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.801836  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.807734  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.873480  152463 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0826 12:10:33.873546  152463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.873617  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.873493  152463 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0826 12:10:33.873741  152463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.873772  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.889641  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.921098  152463 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0826 12:10:33.921226  152463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.921326  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.921170  152463 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0826 12:10:33.921463  152463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.921499  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.930650  152463 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0826 12:10:33.930702  152463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:33.930720  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:33.930738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:33.930743  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:33.973954  152463 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0826 12:10:33.974005  152463 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:33.974042  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:33.974059  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:33.974053  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:34.013541  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.013571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.013542  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.053966  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.053985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.068414  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.116750  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.116778  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0826 12:10:34.164943  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0826 12:10:34.172957  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0826 12:10:34.204571  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.230985  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0826 12:10:34.236650  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0826 12:10:34.270826  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0826 12:10:34.270990  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.304050  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0826 12:10:34.304147  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:34.308251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0826 12:10:34.308374  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:34.335314  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0826 12:10:34.348389  152463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.351251  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0826 12:10:34.351376  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:34.359812  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0826 12:10:34.359842  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0826 12:10:34.359863  152463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.359891  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0826 12:10:34.359921  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0826 12:10:34.359948  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:34.359952  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0826 12:10:34.400500  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0826 12:10:34.400644  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:34.428715  152463 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0826 12:10:34.428758  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0826 12:10:34.428776  152463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:34.428802  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0826 12:10:34.428855  152463 ssh_runner.go:195] Run: which crictl
	I0826 12:10:31.586509  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:31.586539  153366 pod_ready.go:82] duration metric: took 5.006322441s for pod "coredns-6f6b679f8f-d5f9l" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:31.586549  153366 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:33.593060  153366 pod_ready.go:103] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:34.092728  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:34.092762  153366 pod_ready.go:82] duration metric: took 2.506204888s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:34.092775  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:32.130190  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:32.630331  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.129323  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:33.629368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.129667  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.629421  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.130330  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:35.630142  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.130340  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:36.629400  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:34.205203  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.704302  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:36.449383  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.089320181s)
	I0826 12:10:36.449436  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0826 12:10:36.449447  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.048765538s)
	I0826 12:10:36.449467  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449481  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0826 12:10:36.449509  152463 ssh_runner.go:235] Completed: which crictl: (2.020634497s)
	I0826 12:10:36.449536  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0826 12:10:36.449568  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.427527  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.977941403s)
	I0826 12:10:38.427585  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0826 12:10:38.427610  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427529  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.977935335s)
	I0826 12:10:38.427668  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0826 12:10:38.427738  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:38.466259  152463 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:10:36.100135  153366 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.100269  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.100296  153366 pod_ready.go:82] duration metric: took 3.007513255s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.100308  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105634  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.105658  153366 pod_ready.go:82] duration metric: took 5.341415ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.105668  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110911  153366 pod_ready.go:93] pod "kube-proxy-lvsx9" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.110939  153366 pod_ready.go:82] duration metric: took 5.263436ms for pod "kube-proxy-lvsx9" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.110950  153366 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115725  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:37.115752  153366 pod_ready.go:82] duration metric: took 4.79279ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:37.115765  153366 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:39.122469  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:37.130309  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:37.629548  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.129413  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.629384  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.130354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:39.629474  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.129901  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:40.629362  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.129862  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:41.629811  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:38.704541  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.704598  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.705026  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:40.616557  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.188857601s)
	I0826 12:10:40.616588  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0826 12:10:40.616614  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616634  152463 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.150337121s)
	I0826 12:10:40.616669  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0826 12:10:40.616769  152463 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0826 12:10:40.616885  152463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:42.472543  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.855842642s)
	I0826 12:10:42.472583  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0826 12:10:42.472586  152463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.855677168s)
	I0826 12:10:42.472620  152463 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0826 12:10:42.472625  152463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:42.472702  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0826 12:10:44.419974  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.947236189s)
	I0826 12:10:44.420011  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0826 12:10:44.420041  152463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:44.420097  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0826 12:10:41.122741  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:43.123416  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:45.623931  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:42.130334  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:42.630068  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.130212  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:43.629443  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.130067  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:44.629805  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.129753  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.629806  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.129401  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:46.630125  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:45.203266  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.205125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:48.038017  152463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.617897174s)
	I0826 12:10:48.038048  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0826 12:10:48.038073  152463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.038114  152463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0826 12:10:48.693199  152463 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19501-99403/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0826 12:10:48.693251  152463 cache_images.go:123] Successfully loaded all cached images
	I0826 12:10:48.693259  152463 cache_images.go:92] duration metric: took 15.214324574s to LoadCachedImages
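Because the check at 12:10:33.478 concluded that no preload exists for v1.31.0, the images the runtime was missing (everything in the LoadCachedImages list except pause:3.10) were shipped from the host-side cache instead: the archive is copied to /var/lib/minikube/images (skipped here, since the files already exist), loaded with podman, and the runtime's view is re-checked through crictl. One iteration of that cycle, using the same paths as this log:

	sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	sudo crictl images --output json    # should now list registry.k8s.io/coredns/coredns:v1.11.1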
	I0826 12:10:48.693274  152463 kubeadm.go:934] updating node { 192.168.50.213 8443 v1.31.0 crio true true} ...
	I0826 12:10:48.693392  152463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-956479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
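The [Unit]/[Service] fragment above is rendered in memory and, by all appearances, is what lands in the 317-byte systemd drop-in written at 12:10:48.768345 (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). The empty `ExecStart=` line is the standard systemd idiom for clearing the stock command before the override supplies the versioned kubelet binary and the node-specific flags.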
	I0826 12:10:48.693481  152463 ssh_runner.go:195] Run: crio config
	I0826 12:10:48.748151  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:48.748176  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:48.748185  152463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0826 12:10:48.748210  152463 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956479 NodeName:no-preload-956479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0826 12:10:48.748387  152463 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956479"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0826 12:10:48.748458  152463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0826 12:10:48.759020  152463 binaries.go:44] Found k8s binaries, skipping transfer
	I0826 12:10:48.759097  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0826 12:10:48.768345  152463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0826 12:10:48.784233  152463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0826 12:10:48.800236  152463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
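Note that the rendered config goes to kubeadm.yaml.new rather than over the live /var/tmp/minikube/kubeadm.yaml; the restart path below (12:10:49.611217) compares the two before deciding whether the running control plane needs reconfiguration:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new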
	I0826 12:10:48.819243  152463 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0826 12:10:48.823154  152463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0826 12:10:48.835973  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:10:48.959506  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:10:48.977413  152463 certs.go:68] Setting up /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479 for IP: 192.168.50.213
	I0826 12:10:48.977437  152463 certs.go:194] generating shared ca certs ...
	I0826 12:10:48.977458  152463 certs.go:226] acquiring lock for ca certs: {Name:mk196d80d16d57be334e9216621ba36f5c556af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:10:48.977653  152463 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key
	I0826 12:10:48.977714  152463 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key
	I0826 12:10:48.977725  152463 certs.go:256] generating profile certs ...
	I0826 12:10:48.977827  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.key
	I0826 12:10:48.977903  152463 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key.5be91d7c
	I0826 12:10:48.977952  152463 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key
	I0826 12:10:48.978094  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem (1338 bytes)
	W0826 12:10:48.978136  152463 certs.go:480] ignoring /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598_empty.pem, impossibly tiny 0 bytes
	I0826 12:10:48.978149  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca-key.pem (1675 bytes)
	I0826 12:10:48.978183  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/ca.pem (1078 bytes)
	I0826 12:10:48.978221  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/cert.pem (1123 bytes)
	I0826 12:10:48.978252  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/certs/key.pem (1679 bytes)
	I0826 12:10:48.978305  152463 certs.go:484] found cert: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem (1708 bytes)
	I0826 12:10:48.978975  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0826 12:10:49.029725  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0826 12:10:49.077908  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0826 12:10:49.112813  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0826 12:10:49.157768  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0826 12:10:49.201804  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0826 12:10:49.228271  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0826 12:10:49.256770  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0826 12:10:49.283073  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/ssl/certs/1065982.pem --> /usr/share/ca-certificates/1065982.pem (1708 bytes)
	I0826 12:10:49.316360  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0826 12:10:49.342284  152463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19501-99403/.minikube/certs/106598.pem --> /usr/share/ca-certificates/106598.pem (1338 bytes)
	I0826 12:10:49.368126  152463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0826 12:10:49.386334  152463 ssh_runner.go:195] Run: openssl version
	I0826 12:10:49.392457  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1065982.pem && ln -fs /usr/share/ca-certificates/1065982.pem /etc/ssl/certs/1065982.pem"
	I0826 12:10:49.404815  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410087  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 26 10:59 /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.410160  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1065982.pem
	I0826 12:10:49.416900  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1065982.pem /etc/ssl/certs/3ec20f2e.0"
	I0826 12:10:49.429893  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0826 12:10:49.442796  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448216  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 26 10:47 /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.448291  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0826 12:10:49.454416  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0826 12:10:49.466241  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106598.pem && ln -fs /usr/share/ca-certificates/106598.pem /etc/ssl/certs/106598.pem"
	I0826 12:10:49.477636  152463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482106  152463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 26 10:59 /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.482193  152463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106598.pem
	I0826 12:10:49.488191  152463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106598.pem /etc/ssl/certs/51391683.0"
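The pattern repeated three times above (1065982.pem, minikubeCA.pem, 106598.pem) is OpenSSL's hashed-directory convention: the PEM is published under /usr/share/ca-certificates, /etc/ssl/certs/<name>.pem points at that copy, and a <subject-hash>.0 symlink is added in /etc/ssl/certs because the hash-named link is how OpenSSL locates a CA at verification time. Generic form of the last step, with cert.pem standing in for any of the three:

	h=$(openssl x509 -hash -noout -in cert.pem)    # subject-name hash, e.g. b5213941 for minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/cert.pem /etc/ssl/certs/$h.0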
	I0826 12:10:49.499538  152463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0826 12:10:49.504332  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0826 12:10:49.510908  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0826 12:10:49.517549  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0826 12:10:49.524925  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0826 12:10:49.531451  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0826 12:10:49.537617  152463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
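`-checkend 86400` makes openssl exit 0 only when the certificate is still valid 86400 seconds (24 hours) from now, so the six probes above are a cheap "does anything expire within a day" gate before the existing control-plane certificates are reused. Standalone version of one of them:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "still valid for 24h" || echo "expires within 24h"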
	I0826 12:10:49.543680  152463 kubeadm.go:392] StartCluster: {Name:no-preload-956479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-956479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 12:10:49.543776  152463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0826 12:10:49.543843  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.587049  152463 cri.go:89] found id: ""
	I0826 12:10:49.587142  152463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0826 12:10:49.597911  152463 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0826 12:10:49.597936  152463 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0826 12:10:49.598001  152463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0826 12:10:49.607974  152463 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0826 12:10:49.608976  152463 kubeconfig.go:125] found "no-preload-956479" server: "https://192.168.50.213:8443"
	I0826 12:10:49.611217  152463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0826 12:10:49.622647  152463 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I0826 12:10:49.622689  152463 kubeadm.go:1160] stopping kube-system containers ...
	I0826 12:10:49.622706  152463 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0826 12:10:49.623002  152463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0826 12:10:49.662463  152463 cri.go:89] found id: ""
	I0826 12:10:49.662549  152463 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0826 12:10:49.681134  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:10:49.691425  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:10:49.691452  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:10:49.691512  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:10:49.701389  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:10:49.701474  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:10:49.713195  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:10:49.722708  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:10:49.722792  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:10:49.732905  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:10:49.742726  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:10:49.742814  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:10:48.123021  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:50.123270  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:47.129441  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:47.629637  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.129381  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:48.630027  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.129789  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.630022  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.130252  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:50.630145  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.129544  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.629646  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:49.704947  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:51.705172  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:49.752415  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:10:49.761573  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:10:49.761667  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:10:49.771209  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:10:49.781057  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:49.889287  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.424782  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.640186  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.713706  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:50.834409  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:10:50.834516  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.335630  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.834665  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:51.851569  152463 api_server.go:72] duration metric: took 1.01717469s to wait for apiserver process to appear ...
	I0826 12:10:51.851601  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:10:51.851626  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:51.852167  152463 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0826 12:10:52.351709  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.441177  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.441210  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:54.441223  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.451907  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0826 12:10:54.451937  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0826 12:10:52.623200  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.122552  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:54.852737  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:54.857641  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:54.857740  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.351825  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.356325  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0826 12:10:55.356364  152463 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0826 12:10:55.851867  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:10:55.858081  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:10:55.865811  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:10:55.865843  152463 api_server.go:131] duration metric: took 4.014234103s to wait for apiserver health ...
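
The healthz sequence above is the usual progression for a restarted control plane: connection refused while the apiserver process is starting, 403 while the anonymous probe is not yet authorized (the RBAC bootstrap roles that allow unauthenticated access to /healthz have not been created), 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200. A minimal Go sketch of this kind of polling loop; the endpoint, interval and timeout are illustrative, and minikube's api_server.go logic is more involved than this.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Connection errors, 403 (anonymous user) and 500
// (post-start hooks still running) are all treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous and the apiserver cert is not trusted here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.213:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
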
	I0826 12:10:55.865853  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:10:55.865861  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:10:55.867818  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:10:52.129473  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:52.629868  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.129585  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:53.629893  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.129446  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.629722  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.130173  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:55.629968  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.129994  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:56.629422  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:54.203474  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:56.204271  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:55.869434  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:10:55.881376  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:10:55.935418  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:10:55.955678  152463 system_pods.go:59] 8 kube-system pods found
	I0826 12:10:55.955721  152463 system_pods.go:61] "coredns-6f6b679f8f-s9685" [b6fca294-8a78-4f7c-a466-11c76362874a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:10:55.955732  152463 system_pods.go:61] "etcd-no-preload-956479" [96da9402-8ea6-4418-892d-7691ab60a10d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0826 12:10:55.955744  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [5fe3eb03-a50c-4a7b-8c50-37262f1b165f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0826 12:10:55.955752  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [362950c9-4466-413e-8248-053fe4d698a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0826 12:10:55.955759  152463 system_pods.go:61] "kube-proxy-kwpqw" [023fc9f9-538e-43d0-a484-e2f4c75c7f34] Running
	I0826 12:10:55.955769  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [d24580b2-8a37-4aaa-8d9d-66f9eb3e0c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0826 12:10:55.955777  152463 system_pods.go:61] "metrics-server-6867b74b74-ldgsl" [264e96c8-430f-40fc-bb9c-7588cc28bc6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:10:55.955787  152463 system_pods.go:61] "storage-provisioner" [de97d99d-eda7-4ae4-8051-2fc34a2fe630] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0826 12:10:55.955803  152463 system_pods.go:74] duration metric: took 20.359455ms to wait for pod list to return data ...
	I0826 12:10:55.955815  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:10:55.972694  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:10:55.972741  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:10:55.972756  152463 node_conditions.go:105] duration metric: took 16.934705ms to run NodePressure ...
	I0826 12:10:55.972781  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0826 12:10:56.283383  152463 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288149  152463 kubeadm.go:739] kubelet initialised
	I0826 12:10:56.288173  152463 kubeadm.go:740] duration metric: took 4.75919ms waiting for restarted kubelet to initialise ...
	I0826 12:10:56.288183  152463 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:10:56.292852  152463 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.297832  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297858  152463 pod_ready.go:82] duration metric: took 4.980322ms for pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.297868  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "coredns-6f6b679f8f-s9685" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.297876  152463 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.302936  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302971  152463 pod_ready.go:82] duration metric: took 5.08663ms for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.302987  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "etcd-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.302995  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.313684  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313719  152463 pod_ready.go:82] duration metric: took 10.716576ms for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.313733  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-apiserver-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.313742  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.339570  152463 pod_ready.go:98] node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339604  152463 pod_ready.go:82] duration metric: took 25.849085ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	E0826 12:10:56.339613  152463 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-956479" hosting pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-956479" has status "Ready":"False"
	I0826 12:10:56.339620  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738759  152463 pod_ready.go:93] pod "kube-proxy-kwpqw" in "kube-system" namespace has status "Ready":"True"
	I0826 12:10:56.738786  152463 pod_ready.go:82] duration metric: took 399.156996ms for pod "kube-proxy-kwpqw" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:56.738798  152463 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:10:58.745103  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
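
The pod_ready helper above polls each system-critical pod until its Ready condition turns True, skipping pods hosted on a node that is itself NotReady. A minimal client-go sketch of the core check; the kubeconfig path, namespace, pod name and timeout below are illustrative, not minikube's own code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its Ready condition is True or the
// timeout elapses, roughly what pod_ready.go is doing in the log above.
func waitForPodReady(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Kubeconfig path is illustrative.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(clientset, "kube-system", "kube-proxy-kwpqw", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
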
	I0826 12:10:57.623412  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.123226  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:10:57.129363  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:57.629878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.129406  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.629611  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.130209  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:59.629354  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.130004  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:00.629599  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.129324  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:01.629623  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:10:58.705336  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:01.206112  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:00.746646  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.748453  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.623495  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:04.623650  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:02.129756  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:02.630078  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:02.630168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:02.668634  152982 cri.go:89] found id: ""
	I0826 12:11:02.668665  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.668673  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:02.668680  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:02.668736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:02.707481  152982 cri.go:89] found id: ""
	I0826 12:11:02.707513  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.707524  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:02.707531  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:02.707600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:02.742412  152982 cri.go:89] found id: ""
	I0826 12:11:02.742441  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.742452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:02.742459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:02.742524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:02.783334  152982 cri.go:89] found id: ""
	I0826 12:11:02.783363  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.783374  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:02.783383  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:02.783442  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:02.819550  152982 cri.go:89] found id: ""
	I0826 12:11:02.819578  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.819586  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:02.819592  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:02.819668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:02.857381  152982 cri.go:89] found id: ""
	I0826 12:11:02.857418  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.857429  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:02.857439  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:02.857508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:02.891198  152982 cri.go:89] found id: ""
	I0826 12:11:02.891231  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.891242  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:02.891249  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:02.891328  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:02.925819  152982 cri.go:89] found id: ""
	I0826 12:11:02.925847  152982 logs.go:276] 0 containers: []
	W0826 12:11:02.925856  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
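
Throughout this run the old-k8s-version harness keeps asking CRI-O for control-plane containers by name and finds none, because the apiserver never comes up. A minimal Go sketch of listing container IDs the same way, by shelling out to crictl; this is a sketch only, not minikube's cri package, and it assumes crictl and root privileges on the target host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or not) whose
// name matches the given pattern, using the same crictl invocation seen in
// the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output means none found.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d containers: %v\n", len(ids), ids)
}
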
	I0826 12:11:02.925867  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:02.925881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:03.061241  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:03.061287  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:03.061306  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:03.132324  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:03.132364  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:03.176590  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:03.176623  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.229320  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:03.229366  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:05.744686  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:05.758429  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:05.758517  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:05.799162  152982 cri.go:89] found id: ""
	I0826 12:11:05.799200  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.799209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:05.799216  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:05.799270  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:05.839302  152982 cri.go:89] found id: ""
	I0826 12:11:05.839341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.839354  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:05.839362  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:05.839438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:05.900064  152982 cri.go:89] found id: ""
	I0826 12:11:05.900094  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.900102  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:05.900108  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:05.900168  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:05.938314  152982 cri.go:89] found id: ""
	I0826 12:11:05.938341  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.938350  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:05.938356  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:05.938423  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:05.975644  152982 cri.go:89] found id: ""
	I0826 12:11:05.975679  152982 logs.go:276] 0 containers: []
	W0826 12:11:05.975691  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:05.975699  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:05.975775  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:06.012720  152982 cri.go:89] found id: ""
	I0826 12:11:06.012752  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.012764  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:06.012772  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:06.012848  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:06.048613  152982 cri.go:89] found id: ""
	I0826 12:11:06.048648  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.048656  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:06.048662  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:06.048717  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:06.083136  152982 cri.go:89] found id: ""
	I0826 12:11:06.083171  152982 logs.go:276] 0 containers: []
	W0826 12:11:06.083183  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:06.083195  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:06.083213  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:06.096570  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:06.096616  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:06.172561  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:06.172588  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:06.172605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:06.252039  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:06.252081  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:06.291076  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:06.291109  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:03.705538  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:06.203800  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:05.245839  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.744844  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.745230  152463 pod_ready.go:103] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:07.123518  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:09.124421  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:08.838693  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:08.853160  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:08.853246  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:08.893024  152982 cri.go:89] found id: ""
	I0826 12:11:08.893058  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.893072  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:08.893083  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:08.893157  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:08.929621  152982 cri.go:89] found id: ""
	I0826 12:11:08.929660  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.929669  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:08.929675  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:08.929744  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:08.965488  152982 cri.go:89] found id: ""
	I0826 12:11:08.965526  152982 logs.go:276] 0 containers: []
	W0826 12:11:08.965541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:08.965550  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:08.965622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:09.001467  152982 cri.go:89] found id: ""
	I0826 12:11:09.001503  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.001515  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:09.001525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:09.001587  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:09.037865  152982 cri.go:89] found id: ""
	I0826 12:11:09.037898  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.037907  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:09.037914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:09.037973  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:09.074537  152982 cri.go:89] found id: ""
	I0826 12:11:09.074571  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.074582  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:09.074591  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:09.074665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:09.111517  152982 cri.go:89] found id: ""
	I0826 12:11:09.111550  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.111561  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:09.111569  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:09.111635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:09.151005  152982 cri.go:89] found id: ""
	I0826 12:11:09.151039  152982 logs.go:276] 0 containers: []
	W0826 12:11:09.151050  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:09.151062  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:09.151079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:09.231625  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:09.231666  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:09.277642  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:09.277685  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:09.326772  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:09.326814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:09.341764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:09.341802  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:09.419087  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:08.203869  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.206872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:12.703516  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:10.246459  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:11:10.246503  152463 pod_ready.go:82] duration metric: took 13.507695458s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:10.246520  152463 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	I0826 12:11:12.254439  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:14.752278  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.126604  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:13.622382  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:15.622915  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:11.920246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:11.933973  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:11.934070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:11.971020  152982 cri.go:89] found id: ""
	I0826 12:11:11.971055  152982 logs.go:276] 0 containers: []
	W0826 12:11:11.971067  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:11.971076  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:11.971147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:12.005639  152982 cri.go:89] found id: ""
	I0826 12:11:12.005669  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.005679  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:12.005687  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:12.005757  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:12.039823  152982 cri.go:89] found id: ""
	I0826 12:11:12.039856  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.039868  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:12.039877  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:12.039954  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:12.075646  152982 cri.go:89] found id: ""
	I0826 12:11:12.075690  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.075702  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:12.075710  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:12.075814  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:12.113810  152982 cri.go:89] found id: ""
	I0826 12:11:12.113838  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.113846  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:12.113852  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:12.113927  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:12.150373  152982 cri.go:89] found id: ""
	I0826 12:11:12.150405  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.150415  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:12.150421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:12.150478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:12.186325  152982 cri.go:89] found id: ""
	I0826 12:11:12.186362  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.186373  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:12.186381  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:12.186444  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:12.221346  152982 cri.go:89] found id: ""
	I0826 12:11:12.221380  152982 logs.go:276] 0 containers: []
	W0826 12:11:12.221392  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:12.221405  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:12.221423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:12.279964  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:12.280006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:12.297102  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:12.297134  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:12.391568  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:12.391593  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:12.391608  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:12.472218  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:12.472259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.012974  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:15.026480  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:15.026553  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:15.060748  152982 cri.go:89] found id: ""
	I0826 12:11:15.060779  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.060787  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:15.060792  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:15.060842  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:15.095611  152982 cri.go:89] found id: ""
	I0826 12:11:15.095644  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.095668  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:15.095683  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:15.095759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:15.130644  152982 cri.go:89] found id: ""
	I0826 12:11:15.130681  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.130692  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:15.130700  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:15.130773  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:15.164343  152982 cri.go:89] found id: ""
	I0826 12:11:15.164375  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.164383  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:15.164391  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:15.164468  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:15.203801  152982 cri.go:89] found id: ""
	I0826 12:11:15.203835  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.203847  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:15.203855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:15.203935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:15.236428  152982 cri.go:89] found id: ""
	I0826 12:11:15.236455  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.236465  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:15.236474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:15.236546  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:15.271307  152982 cri.go:89] found id: ""
	I0826 12:11:15.271345  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.271357  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:15.271365  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:15.271449  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:15.306164  152982 cri.go:89] found id: ""
	I0826 12:11:15.306194  152982 logs.go:276] 0 containers: []
	W0826 12:11:15.306203  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:15.306214  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:15.306228  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:15.319277  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:15.319311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:15.389821  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:15.389853  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:15.389874  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:15.466002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:15.466045  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:15.506591  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:15.506626  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:14.703938  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.704084  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:16.753630  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:19.252388  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.124351  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:20.621827  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:18.061033  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:18.084401  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:18.084478  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:18.127327  152982 cri.go:89] found id: ""
	I0826 12:11:18.127360  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.127371  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:18.127380  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:18.127451  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:18.163215  152982 cri.go:89] found id: ""
	I0826 12:11:18.163249  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.163261  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:18.163270  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:18.163330  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:18.198205  152982 cri.go:89] found id: ""
	I0826 12:11:18.198232  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.198241  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:18.198250  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:18.198322  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:18.233245  152982 cri.go:89] found id: ""
	I0826 12:11:18.233279  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.233291  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:18.233299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:18.233366  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:18.266761  152982 cri.go:89] found id: ""
	I0826 12:11:18.266802  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.266825  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:18.266855  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:18.266932  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:18.301705  152982 cri.go:89] found id: ""
	I0826 12:11:18.301744  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.301755  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:18.301764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:18.301825  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:18.339384  152982 cri.go:89] found id: ""
	I0826 12:11:18.339413  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.339422  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:18.339428  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:18.339486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:18.374435  152982 cri.go:89] found id: ""
	I0826 12:11:18.374467  152982 logs.go:276] 0 containers: []
	W0826 12:11:18.374475  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:18.374485  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:18.374498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:18.414453  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:18.414506  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:18.468667  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:18.468712  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:18.483366  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:18.483399  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:18.554900  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:18.554930  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:18.554948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.135828  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:21.148610  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:21.148690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:21.184455  152982 cri.go:89] found id: ""
	I0826 12:11:21.184484  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.184494  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:21.184503  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:21.184572  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:21.219762  152982 cri.go:89] found id: ""
	I0826 12:11:21.219808  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.219821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:21.219829  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:21.219914  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:21.258106  152982 cri.go:89] found id: ""
	I0826 12:11:21.258136  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.258147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:21.258154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:21.258221  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:21.293698  152982 cri.go:89] found id: ""
	I0826 12:11:21.293741  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.293753  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:21.293764  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:21.293841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:21.328069  152982 cri.go:89] found id: ""
	I0826 12:11:21.328101  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.328115  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:21.328123  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:21.328191  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:21.363723  152982 cri.go:89] found id: ""
	I0826 12:11:21.363757  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.363767  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:21.363776  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:21.363843  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:21.398321  152982 cri.go:89] found id: ""
	I0826 12:11:21.398349  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.398358  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:21.398364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:21.398428  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:21.434139  152982 cri.go:89] found id: ""
	I0826 12:11:21.434169  152982 logs.go:276] 0 containers: []
	W0826 12:11:21.434177  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:21.434189  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:21.434211  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:21.488855  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:21.488900  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:21.503146  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:21.503186  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:21.576190  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:21.576212  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:21.576226  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:21.660280  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:21.660330  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:19.203558  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.704020  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:21.254119  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:23.752986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:22.622972  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.623227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:24.205285  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:24.219929  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:24.220056  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:24.263032  152982 cri.go:89] found id: ""
	I0826 12:11:24.263064  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.263076  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:24.263084  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:24.263154  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:24.301435  152982 cri.go:89] found id: ""
	I0826 12:11:24.301469  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.301479  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:24.301486  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:24.301557  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:24.337463  152982 cri.go:89] found id: ""
	I0826 12:11:24.337494  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.337505  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:24.337513  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:24.337589  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:24.375142  152982 cri.go:89] found id: ""
	I0826 12:11:24.375181  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.375192  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:24.375201  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:24.375277  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:24.414859  152982 cri.go:89] found id: ""
	I0826 12:11:24.414891  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.414902  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:24.414910  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:24.414980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:24.453757  152982 cri.go:89] found id: ""
	I0826 12:11:24.453801  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.453826  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:24.453836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:24.453936  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:24.489558  152982 cri.go:89] found id: ""
	I0826 12:11:24.489592  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.489601  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:24.489606  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:24.489659  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:24.525054  152982 cri.go:89] found id: ""
	I0826 12:11:24.525086  152982 logs.go:276] 0 containers: []
	W0826 12:11:24.525097  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:24.525109  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:24.525131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:24.596120  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:24.596147  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:24.596162  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:24.671993  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:24.672040  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:24.714108  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:24.714139  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:24.764937  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:24.764979  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:23.704101  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.704765  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:25.759905  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:28.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.121723  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:29.122568  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:27.280105  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:27.293479  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:27.293569  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:27.335432  152982 cri.go:89] found id: ""
	I0826 12:11:27.335464  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.335477  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:27.335485  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:27.335565  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:27.371729  152982 cri.go:89] found id: ""
	I0826 12:11:27.371763  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.371774  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:27.371783  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:27.371857  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:27.408210  152982 cri.go:89] found id: ""
	I0826 12:11:27.408238  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.408250  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:27.408258  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:27.408327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:27.442135  152982 cri.go:89] found id: ""
	I0826 12:11:27.442170  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.442186  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:27.442196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:27.442266  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:27.476245  152982 cri.go:89] found id: ""
	I0826 12:11:27.476279  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.476290  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:27.476299  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:27.476421  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:27.510917  152982 cri.go:89] found id: ""
	I0826 12:11:27.510949  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.510958  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:27.510965  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:27.511033  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:27.552891  152982 cri.go:89] found id: ""
	I0826 12:11:27.552925  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.552933  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:27.552939  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:27.552996  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:27.588303  152982 cri.go:89] found id: ""
	I0826 12:11:27.588339  152982 logs.go:276] 0 containers: []
	W0826 12:11:27.588354  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:27.588365  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:27.588383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:27.666493  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:27.666540  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:27.710139  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:27.710176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:27.761327  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:27.761368  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:27.775628  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:27.775667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:27.851736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.351953  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:30.365614  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:30.365705  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:30.400100  152982 cri.go:89] found id: ""
	I0826 12:11:30.400130  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.400140  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:30.400148  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:30.400224  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:30.433714  152982 cri.go:89] found id: ""
	I0826 12:11:30.433746  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.433762  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:30.433770  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:30.433841  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:30.467434  152982 cri.go:89] found id: ""
	I0826 12:11:30.467465  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.467475  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:30.467482  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:30.467549  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:30.501079  152982 cri.go:89] found id: ""
	I0826 12:11:30.501115  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.501128  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:30.501136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:30.501195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:30.536521  152982 cri.go:89] found id: ""
	I0826 12:11:30.536556  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.536568  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:30.536576  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:30.536649  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:30.572998  152982 cri.go:89] found id: ""
	I0826 12:11:30.573030  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.573040  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:30.573048  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:30.573116  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:30.608982  152982 cri.go:89] found id: ""
	I0826 12:11:30.609017  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.609028  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:30.609035  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:30.609110  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:30.648780  152982 cri.go:89] found id: ""
	I0826 12:11:30.648812  152982 logs.go:276] 0 containers: []
	W0826 12:11:30.648824  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:30.648837  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:30.648853  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:30.705822  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:30.705859  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:30.719927  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:30.719956  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:30.799604  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:30.799633  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:30.799650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:30.876392  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:30.876438  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:28.203982  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.204105  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.703547  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:30.255268  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:32.753846  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:31.622470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.623169  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:33.417878  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:33.431323  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:33.431416  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:33.466166  152982 cri.go:89] found id: ""
	I0826 12:11:33.466195  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.466204  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:33.466215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:33.466292  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:33.504322  152982 cri.go:89] found id: ""
	I0826 12:11:33.504351  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.504360  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:33.504367  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:33.504429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:33.542292  152982 cri.go:89] found id: ""
	I0826 12:11:33.542324  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.542332  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:33.542340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:33.542408  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:33.577794  152982 cri.go:89] found id: ""
	I0826 12:11:33.577827  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.577835  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:33.577841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:33.577901  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:33.611525  152982 cri.go:89] found id: ""
	I0826 12:11:33.611561  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.611571  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:33.611580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:33.611661  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:33.650920  152982 cri.go:89] found id: ""
	I0826 12:11:33.650954  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.650966  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:33.650974  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:33.651043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:33.688349  152982 cri.go:89] found id: ""
	I0826 12:11:33.688389  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.688401  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:33.688409  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:33.688479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:33.726501  152982 cri.go:89] found id: ""
	I0826 12:11:33.726533  152982 logs.go:276] 0 containers: []
	W0826 12:11:33.726542  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:33.726553  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:33.726570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:33.740359  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:33.740392  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:33.810992  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:33.811018  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:33.811030  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:33.895742  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:33.895786  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:33.934059  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:33.934090  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.490917  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:36.503916  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:36.504000  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:36.539493  152982 cri.go:89] found id: ""
	I0826 12:11:36.539521  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.539529  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:36.539535  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:36.539597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:36.575517  152982 cri.go:89] found id: ""
	I0826 12:11:36.575556  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.575567  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:36.575576  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:36.575647  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:36.611750  152982 cri.go:89] found id: ""
	I0826 12:11:36.611790  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.611803  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:36.611812  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:36.611880  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:36.649512  152982 cri.go:89] found id: ""
	I0826 12:11:36.649548  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.649561  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:36.649575  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:36.649656  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:36.686741  152982 cri.go:89] found id: ""
	I0826 12:11:36.686774  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.686784  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:36.686791  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:36.686879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:35.204399  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:37.206473  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:34.753931  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.754270  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:39.253118  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.122628  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:38.122940  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:40.623071  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:36.723395  152982 cri.go:89] found id: ""
	I0826 12:11:36.723423  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.723431  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:36.723438  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:36.723503  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:36.761858  152982 cri.go:89] found id: ""
	I0826 12:11:36.761895  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.761906  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:36.761914  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:36.761987  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:36.797265  152982 cri.go:89] found id: ""
	I0826 12:11:36.797297  152982 logs.go:276] 0 containers: []
	W0826 12:11:36.797305  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:36.797315  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:36.797331  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:36.849263  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:36.849313  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:36.863273  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:36.863305  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:36.935214  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:36.935241  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:36.935259  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:37.011799  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:37.011845  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.550075  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:39.563363  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:39.563441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:39.597015  152982 cri.go:89] found id: ""
	I0826 12:11:39.597049  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.597061  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:39.597068  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:39.597138  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:39.634936  152982 cri.go:89] found id: ""
	I0826 12:11:39.634976  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.634988  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:39.634996  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:39.635070  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:39.670376  152982 cri.go:89] found id: ""
	I0826 12:11:39.670406  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.670414  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:39.670421  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:39.670479  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:39.706468  152982 cri.go:89] found id: ""
	I0826 12:11:39.706497  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.706504  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:39.706510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:39.706601  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:39.741133  152982 cri.go:89] found id: ""
	I0826 12:11:39.741166  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.741178  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:39.741187  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:39.741261  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:39.776398  152982 cri.go:89] found id: ""
	I0826 12:11:39.776436  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.776449  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:39.776460  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:39.776533  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:39.811257  152982 cri.go:89] found id: ""
	I0826 12:11:39.811291  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.811305  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:39.811314  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:39.811394  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:39.845825  152982 cri.go:89] found id: ""
	I0826 12:11:39.845858  152982 logs.go:276] 0 containers: []
	W0826 12:11:39.845880  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:39.845893  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:39.845912  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:39.886439  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:39.886481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:39.936942  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:39.936985  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:39.950459  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:39.950494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:40.022791  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:40.022820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:40.022851  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:39.705276  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.705617  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:41.253680  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.753495  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:43.122503  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:45.123917  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:42.602146  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:42.615049  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:42.615124  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:42.655338  152982 cri.go:89] found id: ""
	I0826 12:11:42.655369  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.655377  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:42.655383  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:42.655438  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:42.692964  152982 cri.go:89] found id: ""
	I0826 12:11:42.693001  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.693012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:42.693020  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:42.693095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:42.730011  152982 cri.go:89] found id: ""
	I0826 12:11:42.730040  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.730049  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:42.730055  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:42.730119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:42.765304  152982 cri.go:89] found id: ""
	I0826 12:11:42.765333  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.765341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:42.765348  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:42.765406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:42.805860  152982 cri.go:89] found id: ""
	I0826 12:11:42.805900  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.805912  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:42.805921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:42.805984  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:42.844736  152982 cri.go:89] found id: ""
	I0826 12:11:42.844768  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.844779  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:42.844789  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:42.844855  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:42.879760  152982 cri.go:89] found id: ""
	I0826 12:11:42.879790  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.879801  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:42.879809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:42.879873  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:42.918512  152982 cri.go:89] found id: ""
	I0826 12:11:42.918580  152982 logs.go:276] 0 containers: []
	W0826 12:11:42.918595  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:42.918619  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:42.918640  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:42.971381  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:42.971423  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:42.986027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:42.986069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:43.058511  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:43.058533  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:43.058548  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:43.137904  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:43.137948  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:45.683127  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:45.697237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:45.697323  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:45.737944  152982 cri.go:89] found id: ""
	I0826 12:11:45.737977  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.737989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:45.737997  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:45.738069  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:45.775940  152982 cri.go:89] found id: ""
	I0826 12:11:45.775972  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.775980  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:45.775991  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:45.776047  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:45.811609  152982 cri.go:89] found id: ""
	I0826 12:11:45.811647  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.811658  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:45.811666  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:45.811747  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:45.845566  152982 cri.go:89] found id: ""
	I0826 12:11:45.845600  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.845612  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:45.845620  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:45.845698  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:45.880243  152982 cri.go:89] found id: ""
	I0826 12:11:45.880287  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.880300  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:45.880310  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:45.880406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:45.916121  152982 cri.go:89] found id: ""
	I0826 12:11:45.916150  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.916161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:45.916170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:45.916242  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:45.950397  152982 cri.go:89] found id: ""
	I0826 12:11:45.950430  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.950441  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:45.950449  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:45.950524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:45.987306  152982 cri.go:89] found id: ""
	I0826 12:11:45.987350  152982 logs.go:276] 0 containers: []
	W0826 12:11:45.987363  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:45.987394  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:45.987435  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:46.044580  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:46.044632  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:46.059612  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:46.059648  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:46.133348  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:46.133377  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:46.133396  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:46.217841  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:46.217890  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
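The pass that pid 152982 keeps repeating above (and in the cycles that follow) can be condensed into the shell sketch below. It only restates commands already visible in this log, run on the node over SSH; the v1.20.0 binary path suggests this is the old-k8s-version profile, and every component query returns no IDs because the apiserver never came up.

    # Condensed sketch of the diagnostic pass (hedged: the actual logic lives in
    # minikube's cri.go / logs.go; this only mirrors the commands shown in the log).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"        # all return empty here
    done
    sudo journalctl -u kubelet -n 400              # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig # fails: localhost:8443 refused
    sudo journalctl -u crio -n 400                 # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a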
	I0826 12:11:44.203535  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.703738  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:46.252936  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:48.753329  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:47.623134  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:49.628072  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
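The interleaved pod_ready lines belong to three other profiles started in parallel (pids 152550, 152463, 153366), each polling a metrics-server pod that never reports Ready. A rough kubectl equivalent of that poll is sketched below; the profile name placeholder and the k8s-app=metrics-server label selector are assumptions, not taken from this log.

    # Hedged sketch of an equivalent readiness poll for one profile.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'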
	I0826 12:11:48.758749  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:48.772086  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:48.772172  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:48.806520  152982 cri.go:89] found id: ""
	I0826 12:11:48.806552  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.806563  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:48.806573  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:48.806655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:48.844305  152982 cri.go:89] found id: ""
	I0826 12:11:48.844335  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.844343  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:48.844349  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:48.844409  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:48.882416  152982 cri.go:89] found id: ""
	I0826 12:11:48.882453  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.882462  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:48.882469  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:48.882523  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:48.917756  152982 cri.go:89] found id: ""
	I0826 12:11:48.917798  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.917811  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:48.917818  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:48.917882  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:48.951065  152982 cri.go:89] found id: ""
	I0826 12:11:48.951095  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.951107  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:48.951115  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:48.951185  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:48.984812  152982 cri.go:89] found id: ""
	I0826 12:11:48.984845  152982 logs.go:276] 0 containers: []
	W0826 12:11:48.984857  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:48.984865  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:48.984935  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:49.021449  152982 cri.go:89] found id: ""
	I0826 12:11:49.021483  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.021495  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:49.021505  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:49.021579  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:49.053543  152982 cri.go:89] found id: ""
	I0826 12:11:49.053584  152982 logs.go:276] 0 containers: []
	W0826 12:11:49.053596  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:49.053609  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:49.053625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:49.107227  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:49.107269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:49.121370  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:49.121402  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:49.192279  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:49.192323  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:49.192342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:49.267817  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:49.267861  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:49.204182  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.204589  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:50.753778  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.753986  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:52.122110  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:54.122701  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:51.805801  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:51.821042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:51.821119  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:51.863950  152982 cri.go:89] found id: ""
	I0826 12:11:51.863986  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.863999  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:51.864007  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:51.864082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:51.910582  152982 cri.go:89] found id: ""
	I0826 12:11:51.910621  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.910633  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:51.910649  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:51.910708  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:51.946964  152982 cri.go:89] found id: ""
	I0826 12:11:51.947001  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.947014  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:51.947022  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:51.947095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:51.982892  152982 cri.go:89] found id: ""
	I0826 12:11:51.982926  152982 logs.go:276] 0 containers: []
	W0826 12:11:51.982936  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:51.982944  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:51.983016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:52.017975  152982 cri.go:89] found id: ""
	I0826 12:11:52.018000  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.018009  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:52.018015  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:52.018082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:52.053286  152982 cri.go:89] found id: ""
	I0826 12:11:52.053315  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.053323  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:52.053329  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:52.053391  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:52.088088  152982 cri.go:89] found id: ""
	I0826 12:11:52.088131  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.088144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:52.088153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:52.088235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:52.125911  152982 cri.go:89] found id: ""
	I0826 12:11:52.125938  152982 logs.go:276] 0 containers: []
	W0826 12:11:52.125955  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:52.125967  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:52.125984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:52.167172  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:52.167208  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:52.222819  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:52.222871  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:52.237609  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:52.237650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:52.312439  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:52.312473  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:52.312491  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:54.892552  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:54.907733  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:54.907827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:54.945009  152982 cri.go:89] found id: ""
	I0826 12:11:54.945040  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.945050  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:54.945057  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:54.945128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:54.987578  152982 cri.go:89] found id: ""
	I0826 12:11:54.987608  152982 logs.go:276] 0 containers: []
	W0826 12:11:54.987619  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:54.987627  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:54.987702  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:55.021222  152982 cri.go:89] found id: ""
	I0826 12:11:55.021254  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.021266  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:55.021274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:55.021348  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:55.058906  152982 cri.go:89] found id: ""
	I0826 12:11:55.058933  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.058941  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:55.058948  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:55.059017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:55.094689  152982 cri.go:89] found id: ""
	I0826 12:11:55.094720  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.094727  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:55.094734  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:55.094808  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:55.133269  152982 cri.go:89] found id: ""
	I0826 12:11:55.133298  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.133306  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:55.133313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:55.133376  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:55.170456  152982 cri.go:89] found id: ""
	I0826 12:11:55.170491  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.170501  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:55.170510  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:55.170584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:55.205421  152982 cri.go:89] found id: ""
	I0826 12:11:55.205453  152982 logs.go:276] 0 containers: []
	W0826 12:11:55.205463  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:55.205474  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:55.205490  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:55.258635  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:55.258672  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:55.272799  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:55.272838  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:55.345916  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:11:55.345948  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:55.345966  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:55.421677  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:55.421716  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:53.205479  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.703014  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.704352  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:55.254310  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.753129  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:56.124191  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:58.622612  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:57.960895  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:11:57.974338  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:11:57.974429  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:11:58.010914  152982 cri.go:89] found id: ""
	I0826 12:11:58.010946  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.010955  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:11:58.010966  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:11:58.011046  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:11:58.046393  152982 cri.go:89] found id: ""
	I0826 12:11:58.046437  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.046451  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:11:58.046457  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:11:58.046512  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:11:58.081967  152982 cri.go:89] found id: ""
	I0826 12:11:58.081999  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.082008  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:11:58.082014  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:11:58.082074  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:11:58.118301  152982 cri.go:89] found id: ""
	I0826 12:11:58.118333  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.118344  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:11:58.118352  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:11:58.118420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:11:58.154991  152982 cri.go:89] found id: ""
	I0826 12:11:58.155022  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.155030  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:11:58.155036  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:11:58.155095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:11:58.192768  152982 cri.go:89] found id: ""
	I0826 12:11:58.192814  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.192827  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:11:58.192836  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:11:58.192911  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:11:58.230393  152982 cri.go:89] found id: ""
	I0826 12:11:58.230422  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.230433  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:11:58.230441  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:11:58.230510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:11:58.267899  152982 cri.go:89] found id: ""
	I0826 12:11:58.267935  152982 logs.go:276] 0 containers: []
	W0826 12:11:58.267947  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:11:58.267959  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:11:58.267976  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:11:58.357819  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:11:58.357866  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:11:58.405641  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:11:58.405682  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:11:58.458403  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:11:58.458446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:11:58.472170  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:11:58.472209  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:11:58.544141  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.044595  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:01.059636  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:01.059732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:01.099210  152982 cri.go:89] found id: ""
	I0826 12:12:01.099244  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.099252  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:01.099260  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:01.099315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:01.135865  152982 cri.go:89] found id: ""
	I0826 12:12:01.135895  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.135904  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:01.135915  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:01.135969  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:01.169745  152982 cri.go:89] found id: ""
	I0826 12:12:01.169775  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.169784  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:01.169790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:01.169844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:01.208386  152982 cri.go:89] found id: ""
	I0826 12:12:01.208419  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.208431  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:01.208440  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:01.208508  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:01.250695  152982 cri.go:89] found id: ""
	I0826 12:12:01.250727  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.250738  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:01.250746  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:01.250821  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:01.284796  152982 cri.go:89] found id: ""
	I0826 12:12:01.284825  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.284838  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:01.284845  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:01.284904  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:01.318188  152982 cri.go:89] found id: ""
	I0826 12:12:01.318219  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.318233  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:01.318242  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:01.318313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:01.354986  152982 cri.go:89] found id: ""
	I0826 12:12:01.355024  152982 logs.go:276] 0 containers: []
	W0826 12:12:01.355036  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:01.355055  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:01.355073  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:01.406575  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:01.406625  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:01.421246  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:01.421299  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:01.500127  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:01.500160  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:01.500178  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:01.579560  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:01.579605  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:00.202896  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.204136  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:11:59.758855  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:02.253583  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:01.123695  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:03.622227  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.124292  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:04.138317  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:04.138406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:04.172150  152982 cri.go:89] found id: ""
	I0826 12:12:04.172185  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.172197  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:04.172205  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:04.172281  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:04.206215  152982 cri.go:89] found id: ""
	I0826 12:12:04.206245  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.206253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:04.206259  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:04.206314  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:04.245728  152982 cri.go:89] found id: ""
	I0826 12:12:04.245766  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.245780  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:04.245797  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:04.245875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:04.288292  152982 cri.go:89] found id: ""
	I0826 12:12:04.288328  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.288341  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:04.288358  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:04.288420  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:04.323224  152982 cri.go:89] found id: ""
	I0826 12:12:04.323270  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.323279  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:04.323285  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:04.323353  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:04.356637  152982 cri.go:89] found id: ""
	I0826 12:12:04.356670  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.356681  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:04.356751  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:04.356829  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:04.397159  152982 cri.go:89] found id: ""
	I0826 12:12:04.397202  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.397217  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:04.397225  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:04.397307  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:04.443593  152982 cri.go:89] found id: ""
	I0826 12:12:04.443635  152982 logs.go:276] 0 containers: []
	W0826 12:12:04.443644  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:04.443654  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:04.443667  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:04.527790  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:04.527820  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:04.527840  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:04.603384  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:04.603426  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:04.642782  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:04.642818  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:04.692196  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:04.692239  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:04.704890  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.204192  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:04.753969  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.253318  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:09.253759  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:06.123014  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:08.622705  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:07.208845  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:07.221853  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:07.221925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:07.257184  152982 cri.go:89] found id: ""
	I0826 12:12:07.257220  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.257236  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:07.257244  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:07.257313  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:07.289962  152982 cri.go:89] found id: ""
	I0826 12:12:07.290000  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.290012  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:07.290018  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:07.290082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:07.323408  152982 cri.go:89] found id: ""
	I0826 12:12:07.323440  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.323452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:07.323461  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:07.323527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:07.358324  152982 cri.go:89] found id: ""
	I0826 12:12:07.358353  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.358362  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:07.358368  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:07.358436  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:07.393608  152982 cri.go:89] found id: ""
	I0826 12:12:07.393657  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.393666  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:07.393671  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:07.393739  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:07.427738  152982 cri.go:89] found id: ""
	I0826 12:12:07.427772  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.427782  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:07.427790  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:07.427879  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:07.466467  152982 cri.go:89] found id: ""
	I0826 12:12:07.466508  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.466520  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:07.466528  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:07.466603  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:07.501589  152982 cri.go:89] found id: ""
	I0826 12:12:07.501630  152982 logs.go:276] 0 containers: []
	W0826 12:12:07.501645  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:07.501658  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:07.501678  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:07.550668  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:07.550708  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:07.564191  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:07.564224  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:07.638593  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:07.638626  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:07.638645  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:07.722262  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:07.722311  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:10.265369  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:10.278719  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:10.278807  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:10.314533  152982 cri.go:89] found id: ""
	I0826 12:12:10.314568  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.314581  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:10.314589  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:10.314664  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:10.355983  152982 cri.go:89] found id: ""
	I0826 12:12:10.356014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.356023  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:10.356029  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:10.356091  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:10.391815  152982 cri.go:89] found id: ""
	I0826 12:12:10.391850  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.391860  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:10.391867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:10.391933  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:10.430280  152982 cri.go:89] found id: ""
	I0826 12:12:10.430309  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.430318  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:10.430324  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:10.430383  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:10.467983  152982 cri.go:89] found id: ""
	I0826 12:12:10.468014  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.468025  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:10.468034  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:10.468103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:10.501682  152982 cri.go:89] found id: ""
	I0826 12:12:10.501712  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.501720  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:10.501726  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:10.501779  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:10.536760  152982 cri.go:89] found id: ""
	I0826 12:12:10.536790  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.536802  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:10.536810  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:10.536885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:10.572626  152982 cri.go:89] found id: ""
	I0826 12:12:10.572663  152982 logs.go:276] 0 containers: []
	W0826 12:12:10.572677  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:10.572690  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:10.572707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:10.628207  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:10.628242  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:10.641767  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:10.641799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:10.716431  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:10.716463  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:10.716481  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:10.801367  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:10.801416  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:09.205156  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.704152  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.754090  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:14.253111  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:11.122118  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.123368  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:15.623046  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:13.346625  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:13.359838  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:13.359925  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:13.393199  152982 cri.go:89] found id: ""
	I0826 12:12:13.393228  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.393241  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:13.393249  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:13.393321  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:13.429651  152982 cri.go:89] found id: ""
	I0826 12:12:13.429696  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.429709  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:13.429718  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:13.429778  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:13.463913  152982 cri.go:89] found id: ""
	I0826 12:12:13.463947  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.463959  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:13.463967  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:13.464035  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:13.498933  152982 cri.go:89] found id: ""
	I0826 12:12:13.498966  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.498977  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:13.498987  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:13.499064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:13.535136  152982 cri.go:89] found id: ""
	I0826 12:12:13.535166  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.535177  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:13.535185  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:13.535260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:13.573468  152982 cri.go:89] found id: ""
	I0826 12:12:13.573504  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.573516  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:13.573525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:13.573597  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:13.612852  152982 cri.go:89] found id: ""
	I0826 12:12:13.612900  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.612913  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:13.612921  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:13.612994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:13.649176  152982 cri.go:89] found id: ""
	I0826 12:12:13.649204  152982 logs.go:276] 0 containers: []
	W0826 12:12:13.649220  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:13.649230  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:13.649247  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:13.663880  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:13.663908  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:13.741960  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:13.741982  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:13.741999  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:13.829286  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:13.829342  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:13.868186  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:13.868218  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.422802  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:16.436680  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:16.436759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:16.471551  152982 cri.go:89] found id: ""
	I0826 12:12:16.471585  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.471605  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:16.471623  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:16.471695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:16.507468  152982 cri.go:89] found id: ""
	I0826 12:12:16.507504  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.507517  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:16.507526  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:16.507600  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:16.542283  152982 cri.go:89] found id: ""
	I0826 12:12:16.542314  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.542325  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:16.542336  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:16.542406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:16.590390  152982 cri.go:89] found id: ""
	I0826 12:12:16.590429  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.590443  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:16.590452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:16.590593  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:16.625344  152982 cri.go:89] found id: ""
	I0826 12:12:16.625371  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.625382  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:16.625389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:16.625463  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:16.660153  152982 cri.go:89] found id: ""
	I0826 12:12:16.660194  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.660204  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:16.660211  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:16.660268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:16.696541  152982 cri.go:89] found id: ""
	I0826 12:12:16.696572  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.696580  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:16.696586  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:16.696655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:14.202993  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.204125  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.255066  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:18.752641  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:17.624099  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.122254  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:16.732416  152982 cri.go:89] found id: ""
	I0826 12:12:16.732448  152982 logs.go:276] 0 containers: []
	W0826 12:12:16.732456  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:16.732469  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:16.732486  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:16.809058  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:16.809106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:16.848200  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:16.848269  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:16.904985  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:16.905033  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:16.918966  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:16.919000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:16.989371  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
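
The cycle above is minikube's post-mortem container check: for every expected control-plane component it runs "sudo crictl ps -a --quiet --name=<component>" and records found id: "" when nothing has ever been created on the node. Below is an illustrative Go sketch of the same check (not the cri.go/logs.go implementation); it assumes crictl and passwordless sudo are available, e.g. inside a minikube ssh session.

// Illustrative sketch only: ask crictl for every expected control-plane
// container; an empty ID list corresponds to the `found id: ""` lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints container IDs only; -a also includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d containers: %v\n", name, len(ids), ids)
	}
}
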
	I0826 12:12:19.490349  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:19.502851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:19.502946  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:19.534939  152982 cri.go:89] found id: ""
	I0826 12:12:19.534966  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.534974  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:19.534981  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:19.535036  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:19.567128  152982 cri.go:89] found id: ""
	I0826 12:12:19.567161  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.567177  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:19.567185  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:19.567257  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:19.601548  152982 cri.go:89] found id: ""
	I0826 12:12:19.601580  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.601590  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:19.601598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:19.601670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:19.636903  152982 cri.go:89] found id: ""
	I0826 12:12:19.636930  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.636938  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:19.636949  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:19.637018  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:19.670155  152982 cri.go:89] found id: ""
	I0826 12:12:19.670181  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.670190  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:19.670196  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:19.670258  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:19.705052  152982 cri.go:89] found id: ""
	I0826 12:12:19.705079  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.705090  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:19.705099  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:19.705163  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:19.744106  152982 cri.go:89] found id: ""
	I0826 12:12:19.744136  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.744144  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:19.744151  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:19.744227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:19.780078  152982 cri.go:89] found id: ""
	I0826 12:12:19.780107  152982 logs.go:276] 0 containers: []
	W0826 12:12:19.780116  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:19.780126  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:19.780138  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:19.831821  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:19.831884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:19.847572  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:19.847610  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:19.924723  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:19.924745  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:19.924763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:20.001249  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:20.001292  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:18.204529  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.205670  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.703658  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:20.753284  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.753357  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.122490  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:24.122773  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:22.540357  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:22.554408  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:22.554483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:22.588270  152982 cri.go:89] found id: ""
	I0826 12:12:22.588298  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.588310  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:22.588329  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:22.588411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:22.623979  152982 cri.go:89] found id: ""
	I0826 12:12:22.624003  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.624011  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:22.624016  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:22.624077  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:22.657151  152982 cri.go:89] found id: ""
	I0826 12:12:22.657185  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.657196  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:22.657204  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:22.657265  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:22.694187  152982 cri.go:89] found id: ""
	I0826 12:12:22.694217  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.694229  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:22.694237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:22.694327  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:22.734911  152982 cri.go:89] found id: ""
	I0826 12:12:22.734948  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.734960  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:22.734968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:22.735039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:22.772754  152982 cri.go:89] found id: ""
	I0826 12:12:22.772790  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.772802  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:22.772809  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:22.772877  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:22.810340  152982 cri.go:89] found id: ""
	I0826 12:12:22.810376  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.810385  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:22.810392  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:22.810467  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:22.847910  152982 cri.go:89] found id: ""
	I0826 12:12:22.847942  152982 logs.go:276] 0 containers: []
	W0826 12:12:22.847953  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:22.847966  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:22.847984  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:22.900871  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:22.900927  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:22.914758  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:22.914790  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:22.981736  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:22.981766  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:22.981780  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:23.062669  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:23.062717  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:25.604600  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:25.617474  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:25.617584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:25.653870  152982 cri.go:89] found id: ""
	I0826 12:12:25.653904  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.653917  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:25.653925  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:25.653993  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:25.693937  152982 cri.go:89] found id: ""
	I0826 12:12:25.693965  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.693973  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:25.693979  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:25.694039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:25.730590  152982 cri.go:89] found id: ""
	I0826 12:12:25.730622  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.730633  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:25.730640  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:25.730729  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:25.768192  152982 cri.go:89] found id: ""
	I0826 12:12:25.768221  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.768231  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:25.768240  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:25.768296  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:25.808518  152982 cri.go:89] found id: ""
	I0826 12:12:25.808545  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.808553  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:25.808559  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:25.808622  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:25.843434  152982 cri.go:89] found id: ""
	I0826 12:12:25.843464  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.843475  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:25.843487  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:25.843561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:25.879093  152982 cri.go:89] found id: ""
	I0826 12:12:25.879124  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.879138  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:25.879146  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:25.879212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:25.915871  152982 cri.go:89] found id: ""
	I0826 12:12:25.915919  152982 logs.go:276] 0 containers: []
	W0826 12:12:25.915932  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:25.915945  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:25.915973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:25.998597  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:25.998652  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:26.038701  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:26.038736  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:26.091618  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:26.091665  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:26.105349  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:26.105383  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:26.178337  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
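
Every describe-nodes attempt in this loop fails the same way: "The connection to the server localhost:8443 was refused", meaning nothing is listening on the apiserver port while the pgrep for kube-apiserver keeps coming back empty. The following is a small illustrative Go poller for that symptom, not part of the test suite; the 3-second interval roughly mirrors the cadence of the retries visible in the timestamps above.

// Illustrative sketch: dial the apiserver port the failed kubectl calls point
// at (localhost:8443) until a TCP connection is accepted or a deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("kube-apiserver port is accepting connections")
			return
		}
		// While kube-apiserver is down this prints "connection refused",
		// the same symptom kubectl reports in the log above.
		fmt.Println("still waiting:", err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for localhost:8443")
}
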
	I0826 12:12:24.704209  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.204036  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:25.253322  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:27.754717  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:26.123520  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:28.622019  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.622453  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
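
The interleaved pod_ready.go:103 lines come from other StartStop tests in the run, each polling until its metrics-server pod reports the Ready=True condition. A hedged client-go sketch of that readiness check follows (not the minikube test helper itself); the kubeconfig path is the default home-directory location and the pod name is copied from the log, both assumptions to adjust for your cluster.

// Hedged sketch: fetch one metrics-server pod and report whether it carries
// the Ready=True condition that the pod_ready lines keep polling for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; replace with one from your cluster.
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.TODO(), "metrics-server-6867b74b74-cw5t8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
}
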
	I0826 12:12:28.679177  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:28.695361  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:28.695455  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:28.734977  152982 cri.go:89] found id: ""
	I0826 12:12:28.735008  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.735026  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:28.735032  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:28.735107  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:28.771634  152982 cri.go:89] found id: ""
	I0826 12:12:28.771665  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.771677  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:28.771685  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:28.771763  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:28.810976  152982 cri.go:89] found id: ""
	I0826 12:12:28.811010  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.811022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:28.811030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:28.811098  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:28.850204  152982 cri.go:89] found id: ""
	I0826 12:12:28.850233  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.850241  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:28.850247  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:28.850300  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:28.888814  152982 cri.go:89] found id: ""
	I0826 12:12:28.888845  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.888852  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:28.888862  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:28.888923  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:28.925203  152982 cri.go:89] found id: ""
	I0826 12:12:28.925251  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.925264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:28.925273  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:28.925359  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:28.963656  152982 cri.go:89] found id: ""
	I0826 12:12:28.963684  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.963700  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:28.963706  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:28.963761  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:28.997644  152982 cri.go:89] found id: ""
	I0826 12:12:28.997677  152982 logs.go:276] 0 containers: []
	W0826 12:12:28.997686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:28.997696  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:28.997711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:29.036668  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:29.036711  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:29.089020  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:29.089064  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:29.103051  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:29.103083  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:29.173327  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:29.173363  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:29.173380  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:29.703493  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.709124  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:30.252850  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:32.254087  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:33.121656  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:35.122979  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:31.755073  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:31.769098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:31.769194  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:31.811919  152982 cri.go:89] found id: ""
	I0826 12:12:31.811950  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.811970  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:31.811978  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:31.812059  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:31.849728  152982 cri.go:89] found id: ""
	I0826 12:12:31.849760  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.849771  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:31.849778  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:31.849844  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:31.884973  152982 cri.go:89] found id: ""
	I0826 12:12:31.885013  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.885022  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:31.885030  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:31.885088  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:31.925013  152982 cri.go:89] found id: ""
	I0826 12:12:31.925043  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.925052  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:31.925060  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:31.925121  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:31.960066  152982 cri.go:89] found id: ""
	I0826 12:12:31.960101  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.960112  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:31.960130  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:31.960205  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:31.994706  152982 cri.go:89] found id: ""
	I0826 12:12:31.994739  152982 logs.go:276] 0 containers: []
	W0826 12:12:31.994747  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:31.994753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:31.994810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:32.030101  152982 cri.go:89] found id: ""
	I0826 12:12:32.030134  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.030142  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:32.030148  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:32.030213  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:32.064056  152982 cri.go:89] found id: ""
	I0826 12:12:32.064087  152982 logs.go:276] 0 containers: []
	W0826 12:12:32.064095  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:32.064105  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:32.064118  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:32.115930  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:32.115974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:32.144522  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:32.144594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:32.216857  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:32.216886  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:32.216946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:32.293229  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:32.293268  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
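
Even with no containers found, each iteration still gathers node-level logs: journalctl for kubelet and CRI-O, a filtered dmesg, the (failing) describe nodes, and a container-status listing. The sketch below simply replays the three journal/dmesg commands exactly as they appear in the log; it is not logs.go, and it assumes bash, journalctl, dmesg and sudo on the node.

// Minimal sketch: run the same log-gathering shell commands in order and
// print their combined output, mirroring the "Gathering logs for ..." steps.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range steps {
		fmt.Printf("=== Gathering logs for %s ===\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Println("command failed:", err)
		}
		fmt.Print(string(out))
	}
}
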
	I0826 12:12:34.833049  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:34.846325  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:34.846389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:34.879253  152982 cri.go:89] found id: ""
	I0826 12:12:34.879282  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.879299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:34.879308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:34.879377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:34.913351  152982 cri.go:89] found id: ""
	I0826 12:12:34.913381  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.913393  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:34.913401  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:34.913487  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:34.946929  152982 cri.go:89] found id: ""
	I0826 12:12:34.946958  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.946966  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:34.946972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:34.947040  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:34.980517  152982 cri.go:89] found id: ""
	I0826 12:12:34.980559  152982 logs.go:276] 0 containers: []
	W0826 12:12:34.980571  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:34.980580  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:34.980651  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:35.015853  152982 cri.go:89] found id: ""
	I0826 12:12:35.015886  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.015894  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:35.015909  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:35.015972  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:35.053568  152982 cri.go:89] found id: ""
	I0826 12:12:35.053597  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.053606  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:35.053613  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:35.053667  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:35.091369  152982 cri.go:89] found id: ""
	I0826 12:12:35.091398  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.091408  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:35.091415  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:35.091483  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:35.129233  152982 cri.go:89] found id: ""
	I0826 12:12:35.129259  152982 logs.go:276] 0 containers: []
	W0826 12:12:35.129267  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:35.129276  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:35.129288  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:35.181977  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:35.182016  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:35.195780  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:35.195812  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:35.274390  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:35.274416  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:35.274433  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:35.353774  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:35.353819  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:34.203244  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:36.703229  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:34.754010  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.253336  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.253674  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.622257  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:39.622967  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:37.894664  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:37.908390  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:37.908480  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:37.943642  152982 cri.go:89] found id: ""
	I0826 12:12:37.943669  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.943681  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:37.943689  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:37.943759  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:37.978371  152982 cri.go:89] found id: ""
	I0826 12:12:37.978407  152982 logs.go:276] 0 containers: []
	W0826 12:12:37.978418  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:37.978426  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:37.978497  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:38.014205  152982 cri.go:89] found id: ""
	I0826 12:12:38.014237  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.014248  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:38.014255  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:38.014326  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:38.048705  152982 cri.go:89] found id: ""
	I0826 12:12:38.048737  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.048748  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:38.048758  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:38.048824  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:38.085009  152982 cri.go:89] found id: ""
	I0826 12:12:38.085039  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.085050  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:38.085058  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:38.085147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:38.125923  152982 cri.go:89] found id: ""
	I0826 12:12:38.125949  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.125960  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:38.125968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:38.126038  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:38.161460  152982 cri.go:89] found id: ""
	I0826 12:12:38.161492  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.161504  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:38.161512  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:38.161584  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:38.194433  152982 cri.go:89] found id: ""
	I0826 12:12:38.194462  152982 logs.go:276] 0 containers: []
	W0826 12:12:38.194472  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:38.194481  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:38.194494  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.245809  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:38.245854  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:38.261100  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:38.261141  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:38.329187  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:38.329218  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:38.329237  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:38.416798  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:38.416844  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:40.962763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:40.976214  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:40.976287  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:41.010312  152982 cri.go:89] found id: ""
	I0826 12:12:41.010346  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.010356  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:41.010363  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:41.010422  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:41.051708  152982 cri.go:89] found id: ""
	I0826 12:12:41.051738  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.051746  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:41.051753  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:41.051818  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:41.087107  152982 cri.go:89] found id: ""
	I0826 12:12:41.087140  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.087152  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:41.087161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:41.087238  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:41.125099  152982 cri.go:89] found id: ""
	I0826 12:12:41.125132  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.125144  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:41.125153  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:41.125216  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:41.160192  152982 cri.go:89] found id: ""
	I0826 12:12:41.160220  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.160227  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:41.160234  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:41.160291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:41.193507  152982 cri.go:89] found id: ""
	I0826 12:12:41.193536  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.193548  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:41.193557  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:41.193650  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:41.235788  152982 cri.go:89] found id: ""
	I0826 12:12:41.235827  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.235835  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:41.235841  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:41.235897  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:41.271720  152982 cri.go:89] found id: ""
	I0826 12:12:41.271755  152982 logs.go:276] 0 containers: []
	W0826 12:12:41.271770  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:41.271780  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:41.271794  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:41.285694  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:41.285731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:41.351221  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:41.351245  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:41.351261  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:41.434748  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:41.434792  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:41.472446  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:41.472477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:38.704389  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.204525  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:41.752919  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:43.753710  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:42.123210  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.623786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:44.022222  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:44.036128  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:44.036201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:44.071142  152982 cri.go:89] found id: ""
	I0826 12:12:44.071177  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.071187  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:44.071196  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:44.071267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:44.105068  152982 cri.go:89] found id: ""
	I0826 12:12:44.105101  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.105110  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:44.105116  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:44.105184  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:44.140069  152982 cri.go:89] found id: ""
	I0826 12:12:44.140102  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.140113  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:44.140121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:44.140190  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:44.177686  152982 cri.go:89] found id: ""
	I0826 12:12:44.177724  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.177736  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:44.177744  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:44.177819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:44.214326  152982 cri.go:89] found id: ""
	I0826 12:12:44.214356  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.214364  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:44.214371  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:44.214426  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:44.251675  152982 cri.go:89] found id: ""
	I0826 12:12:44.251703  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.251711  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:44.251718  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:44.251776  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:44.303077  152982 cri.go:89] found id: ""
	I0826 12:12:44.303107  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.303116  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:44.303122  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:44.303183  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:44.355913  152982 cri.go:89] found id: ""
	I0826 12:12:44.355944  152982 logs.go:276] 0 containers: []
	W0826 12:12:44.355952  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:44.355962  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:44.355974  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:44.421610  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:44.421653  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:44.435567  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:44.435603  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:44.501406  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:44.501427  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:44.501440  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:44.582723  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:44.582763  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:43.703519  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.202958  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:46.253330  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:48.753043  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.122857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:49.621786  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:47.124026  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:47.139183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:47.139260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:47.175395  152982 cri.go:89] found id: ""
	I0826 12:12:47.175424  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.175440  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:47.175450  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:47.175514  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:47.214536  152982 cri.go:89] found id: ""
	I0826 12:12:47.214568  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.214580  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:47.214588  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:47.214655  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:47.255297  152982 cri.go:89] found id: ""
	I0826 12:12:47.255321  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.255329  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:47.255335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:47.255402  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:47.290638  152982 cri.go:89] found id: ""
	I0826 12:12:47.290666  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.290675  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:47.290681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:47.290736  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:47.327313  152982 cri.go:89] found id: ""
	I0826 12:12:47.327345  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.327352  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:47.327359  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:47.327425  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:47.366221  152982 cri.go:89] found id: ""
	I0826 12:12:47.366256  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.366264  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:47.366274  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:47.366331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:47.401043  152982 cri.go:89] found id: ""
	I0826 12:12:47.401077  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.401088  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:47.401095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:47.401166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:47.435800  152982 cri.go:89] found id: ""
	I0826 12:12:47.435837  152982 logs.go:276] 0 containers: []
	W0826 12:12:47.435848  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:47.435860  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:47.435881  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:47.487917  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:47.487955  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:47.501696  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:47.501731  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:47.569026  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:47.569053  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:47.569069  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:47.651002  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:47.651049  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:50.192329  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:50.213937  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:50.214017  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:50.253835  152982 cri.go:89] found id: ""
	I0826 12:12:50.253868  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.253879  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:50.253890  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:50.253957  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:50.296898  152982 cri.go:89] found id: ""
	I0826 12:12:50.296928  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.296939  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:50.296946  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:50.297016  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:50.350327  152982 cri.go:89] found id: ""
	I0826 12:12:50.350356  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.350365  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:50.350375  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:50.350443  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:50.385191  152982 cri.go:89] found id: ""
	I0826 12:12:50.385225  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.385236  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:50.385243  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:50.385309  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:50.418371  152982 cri.go:89] found id: ""
	I0826 12:12:50.418412  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.418423  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:50.418432  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:50.418505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:50.450924  152982 cri.go:89] found id: ""
	I0826 12:12:50.450956  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.450965  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:50.450972  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:50.451043  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:50.485695  152982 cri.go:89] found id: ""
	I0826 12:12:50.485728  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.485739  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:50.485748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:50.485819  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:50.519570  152982 cri.go:89] found id: ""
	I0826 12:12:50.519609  152982 logs.go:276] 0 containers: []
	W0826 12:12:50.519622  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:50.519633  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:50.519650  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:50.572959  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:50.573001  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:50.586794  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:50.586826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:50.654148  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:50.654180  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:50.654255  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:50.738067  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:50.738107  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:48.203018  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.205528  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.704054  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:50.758038  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.252772  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:52.121906  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:54.622553  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:53.281246  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:53.296023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:53.296103  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:53.333031  152982 cri.go:89] found id: ""
	I0826 12:12:53.333073  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.333092  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:53.333100  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:53.333171  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:53.367753  152982 cri.go:89] found id: ""
	I0826 12:12:53.367782  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.367791  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:53.367796  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:53.367849  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:53.403702  152982 cri.go:89] found id: ""
	I0826 12:12:53.403733  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.403745  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:53.403753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:53.403823  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:53.439911  152982 cri.go:89] found id: ""
	I0826 12:12:53.439939  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.439947  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:53.439953  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:53.440008  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:53.475053  152982 cri.go:89] found id: ""
	I0826 12:12:53.475079  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.475088  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:53.475094  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:53.475152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:53.509087  152982 cri.go:89] found id: ""
	I0826 12:12:53.509117  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.509128  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:53.509136  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:53.509207  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:53.546090  152982 cri.go:89] found id: ""
	I0826 12:12:53.546123  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.546133  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:53.546139  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:53.546195  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:53.581675  152982 cri.go:89] found id: ""
	I0826 12:12:53.581713  152982 logs.go:276] 0 containers: []
	W0826 12:12:53.581727  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:53.581741  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:53.581756  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:53.632866  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:53.632929  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:53.646045  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:53.646079  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:53.716768  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:53.716798  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:53.716814  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:53.799490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:53.799541  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.340389  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:56.353305  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:56.353377  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:56.389690  152982 cri.go:89] found id: ""
	I0826 12:12:56.389725  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.389733  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:56.389741  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:56.389797  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:56.423214  152982 cri.go:89] found id: ""
	I0826 12:12:56.423245  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.423253  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:56.423260  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:56.423315  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:56.459033  152982 cri.go:89] found id: ""
	I0826 12:12:56.459069  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.459077  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:56.459083  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:56.459141  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:56.494408  152982 cri.go:89] found id: ""
	I0826 12:12:56.494437  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.494446  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:56.494453  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:56.494507  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:56.533471  152982 cri.go:89] found id: ""
	I0826 12:12:56.533506  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.533517  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:56.533525  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:56.533595  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:56.572644  152982 cri.go:89] found id: ""
	I0826 12:12:56.572675  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.572685  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:56.572690  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:56.572769  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:56.610948  152982 cri.go:89] found id: ""
	I0826 12:12:56.610978  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.610989  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:56.610997  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:56.611161  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:56.651352  152982 cri.go:89] found id: ""
	I0826 12:12:56.651391  152982 logs.go:276] 0 containers: []
	W0826 12:12:56.651406  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:56.651419  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:56.651446  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:56.666627  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:56.666664  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0826 12:12:54.704640  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:56.704830  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:55.253572  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.754403  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:57.122603  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.623004  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	W0826 12:12:56.741054  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:56.741087  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:56.741106  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:56.818138  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:56.818194  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:56.858182  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:56.858216  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.412428  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:12:59.426340  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:12:59.426410  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:12:59.459975  152982 cri.go:89] found id: ""
	I0826 12:12:59.460011  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.460021  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:12:59.460027  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:12:59.460082  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:12:59.491890  152982 cri.go:89] found id: ""
	I0826 12:12:59.491918  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.491928  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:12:59.491934  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:12:59.491994  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:12:59.527284  152982 cri.go:89] found id: ""
	I0826 12:12:59.527318  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.527330  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:12:59.527339  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:12:59.527411  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:12:59.560996  152982 cri.go:89] found id: ""
	I0826 12:12:59.561027  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.561036  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:12:59.561042  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:12:59.561096  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:12:59.595827  152982 cri.go:89] found id: ""
	I0826 12:12:59.595858  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.595866  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:12:59.595882  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:12:59.595970  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:12:59.632943  152982 cri.go:89] found id: ""
	I0826 12:12:59.632981  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.632993  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:12:59.633001  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:12:59.633071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:12:59.669013  152982 cri.go:89] found id: ""
	I0826 12:12:59.669047  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.669057  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:12:59.669065  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:12:59.669139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:12:59.703286  152982 cri.go:89] found id: ""
	I0826 12:12:59.703320  152982 logs.go:276] 0 containers: []
	W0826 12:12:59.703331  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:12:59.703342  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:12:59.703359  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:12:59.756848  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:12:59.756882  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:12:59.770551  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:12:59.770592  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:12:59.842153  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:12:59.842176  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:12:59.842190  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:12:59.925190  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:12:59.925231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:12:59.203898  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.703960  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:12:59.755160  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.252684  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.253046  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:01.623605  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:04.122069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:02.464977  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:02.478901  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:02.478991  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:02.514845  152982 cri.go:89] found id: ""
	I0826 12:13:02.514890  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.514903  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:02.514912  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:02.514980  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:02.550867  152982 cri.go:89] found id: ""
	I0826 12:13:02.550899  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.550910  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:02.550918  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:02.550988  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:02.585494  152982 cri.go:89] found id: ""
	I0826 12:13:02.585522  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.585531  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:02.585537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:02.585617  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:02.623561  152982 cri.go:89] found id: ""
	I0826 12:13:02.623603  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.623619  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:02.623630  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:02.623696  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:02.660636  152982 cri.go:89] found id: ""
	I0826 12:13:02.660665  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.660675  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:02.660683  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:02.660760  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:02.696140  152982 cri.go:89] found id: ""
	I0826 12:13:02.696173  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.696184  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:02.696192  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:02.696260  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:02.735056  152982 cri.go:89] found id: ""
	I0826 12:13:02.735098  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.735111  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:02.735121  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:02.735201  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:02.770841  152982 cri.go:89] found id: ""
	I0826 12:13:02.770886  152982 logs.go:276] 0 containers: []
	W0826 12:13:02.770899  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:02.770911  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:02.770928  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:02.845458  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:02.845498  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:02.885537  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:02.885574  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:02.935507  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:02.935560  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:02.950010  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:02.950046  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:03.018963  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.520071  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:05.535473  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:05.535554  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:05.572890  152982 cri.go:89] found id: ""
	I0826 12:13:05.572923  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.572934  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:05.572942  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:05.573019  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:05.610469  152982 cri.go:89] found id: ""
	I0826 12:13:05.610503  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.610515  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:05.610522  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:05.610586  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:05.647446  152982 cri.go:89] found id: ""
	I0826 12:13:05.647480  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.647489  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:05.647495  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:05.647561  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:05.686619  152982 cri.go:89] found id: ""
	I0826 12:13:05.686660  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.686672  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:05.686681  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:05.686754  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:05.725893  152982 cri.go:89] found id: ""
	I0826 12:13:05.725927  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.725936  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:05.725943  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:05.726034  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:05.761052  152982 cri.go:89] found id: ""
	I0826 12:13:05.761081  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.761089  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:05.761095  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:05.761147  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:05.795336  152982 cri.go:89] found id: ""
	I0826 12:13:05.795367  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.795379  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:05.795387  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:05.795447  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:05.834397  152982 cri.go:89] found id: ""
	I0826 12:13:05.834441  152982 logs.go:276] 0 containers: []
	W0826 12:13:05.834449  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:05.834459  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:05.834472  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:05.847882  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:05.847919  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:05.921941  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:05.921965  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:05.921982  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:06.001380  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:06.001424  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:06.040519  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:06.040555  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:04.203987  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.704484  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.752615  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.753340  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:06.122654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.122742  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.123434  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:08.591761  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:08.604628  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:08.604724  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:08.639915  152982 cri.go:89] found id: ""
	I0826 12:13:08.639948  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.639957  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:08.639963  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:08.640025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:08.684479  152982 cri.go:89] found id: ""
	I0826 12:13:08.684513  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.684526  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:08.684535  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:08.684613  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:08.724083  152982 cri.go:89] found id: ""
	I0826 12:13:08.724112  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.724121  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:08.724127  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:08.724182  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:08.760781  152982 cri.go:89] found id: ""
	I0826 12:13:08.760830  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.760842  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:08.760851  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:08.760943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:08.795685  152982 cri.go:89] found id: ""
	I0826 12:13:08.795715  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.795723  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:08.795730  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:08.795786  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:08.832123  152982 cri.go:89] found id: ""
	I0826 12:13:08.832152  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.832161  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:08.832167  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:08.832227  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:08.869701  152982 cri.go:89] found id: ""
	I0826 12:13:08.869735  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.869752  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:08.869760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:08.869827  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:08.905399  152982 cri.go:89] found id: ""
	I0826 12:13:08.905444  152982 logs.go:276] 0 containers: []
	W0826 12:13:08.905455  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:08.905469  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:08.905485  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:08.956814  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:08.956857  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:08.971618  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:08.971656  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:09.039360  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:09.039389  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:09.039407  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:09.113464  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:09.113509  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:11.658989  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:11.671816  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:11.671898  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:11.707124  152982 cri.go:89] found id: ""
	I0826 12:13:11.707150  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.707158  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:11.707165  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:11.707230  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:09.203816  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.203914  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:10.757254  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:13.252482  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:12.624138  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.123672  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:11.743127  152982 cri.go:89] found id: ""
	I0826 12:13:11.743155  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.743163  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:11.743169  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:11.743249  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:11.777664  152982 cri.go:89] found id: ""
	I0826 12:13:11.777693  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.777701  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:11.777707  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:11.777766  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:11.811555  152982 cri.go:89] found id: ""
	I0826 12:13:11.811585  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.811593  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:11.811599  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:11.811658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:11.846187  152982 cri.go:89] found id: ""
	I0826 12:13:11.846216  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.846223  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:11.846229  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:11.846291  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:11.882261  152982 cri.go:89] found id: ""
	I0826 12:13:11.882292  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.882310  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:11.882318  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:11.882386  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:11.920538  152982 cri.go:89] found id: ""
	I0826 12:13:11.920572  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.920583  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:11.920590  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:11.920658  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:11.955402  152982 cri.go:89] found id: ""
	I0826 12:13:11.955435  152982 logs.go:276] 0 containers: []
	W0826 12:13:11.955446  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:11.955456  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:11.955473  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:12.007676  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:12.007723  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:12.021378  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:12.021417  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:12.087841  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:12.087868  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:12.087883  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:12.170948  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:12.170991  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:14.712383  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:14.724904  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:14.724982  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:14.759675  152982 cri.go:89] found id: ""
	I0826 12:13:14.759703  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.759711  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:14.759717  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:14.759784  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:14.794440  152982 cri.go:89] found id: ""
	I0826 12:13:14.794471  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.794480  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:14.794488  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:14.794542  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:14.832392  152982 cri.go:89] found id: ""
	I0826 12:13:14.832442  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.832452  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:14.832459  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:14.832524  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:14.870231  152982 cri.go:89] found id: ""
	I0826 12:13:14.870262  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.870273  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:14.870281  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:14.870339  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:14.909480  152982 cri.go:89] found id: ""
	I0826 12:13:14.909517  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.909529  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:14.909536  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:14.909596  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:14.950957  152982 cri.go:89] found id: ""
	I0826 12:13:14.950986  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.950997  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:14.951005  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:14.951071  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:14.995137  152982 cri.go:89] found id: ""
	I0826 12:13:14.995165  152982 logs.go:276] 0 containers: []
	W0826 12:13:14.995176  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:14.995183  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:14.995252  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:15.029939  152982 cri.go:89] found id: ""
	I0826 12:13:15.029969  152982 logs.go:276] 0 containers: []
	W0826 12:13:15.029978  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:15.029987  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:15.030000  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:15.106633  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:15.106675  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:15.152575  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:15.152613  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:15.205645  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:15.205689  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:15.220325  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:15.220369  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:15.289698  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:13.705307  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:16.203733  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:15.253098  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.253276  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.752313  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.621549  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:19.622504  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:17.790709  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:17.804332  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:17.804398  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:17.839735  152982 cri.go:89] found id: ""
	I0826 12:13:17.839779  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.839791  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:17.839803  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:17.839885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:17.875476  152982 cri.go:89] found id: ""
	I0826 12:13:17.875510  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.875521  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:17.875529  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:17.875606  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:17.911715  152982 cri.go:89] found id: ""
	I0826 12:13:17.911745  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.911753  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:17.911760  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:17.911822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:17.949059  152982 cri.go:89] found id: ""
	I0826 12:13:17.949094  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.949102  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:17.949109  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:17.949166  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:17.985319  152982 cri.go:89] found id: ""
	I0826 12:13:17.985365  152982 logs.go:276] 0 containers: []
	W0826 12:13:17.985376  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:17.985385  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:17.985481  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:18.019796  152982 cri.go:89] found id: ""
	I0826 12:13:18.019839  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.019858  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:18.019867  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:18.019931  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:18.053910  152982 cri.go:89] found id: ""
	I0826 12:13:18.053941  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.053953  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:18.053960  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:18.054039  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:18.089854  152982 cri.go:89] found id: ""
	I0826 12:13:18.089888  152982 logs.go:276] 0 containers: []
	W0826 12:13:18.089901  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:18.089917  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:18.089934  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:18.143026  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:18.143070  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.156710  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:18.156740  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:18.222894  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:18.222929  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:18.222946  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:18.298729  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:18.298777  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:20.837506  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:20.851070  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:20.851152  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:20.886253  152982 cri.go:89] found id: ""
	I0826 12:13:20.886289  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.886299  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:20.886308  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:20.886384  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:20.923773  152982 cri.go:89] found id: ""
	I0826 12:13:20.923803  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.923821  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:20.923827  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:20.923884  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:20.959117  152982 cri.go:89] found id: ""
	I0826 12:13:20.959151  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.959162  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:20.959170  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:20.959239  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:20.994088  152982 cri.go:89] found id: ""
	I0826 12:13:20.994121  152982 logs.go:276] 0 containers: []
	W0826 12:13:20.994131  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:20.994138  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:20.994203  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:21.031140  152982 cri.go:89] found id: ""
	I0826 12:13:21.031171  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.031183  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:21.031198  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:21.031267  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:21.064624  152982 cri.go:89] found id: ""
	I0826 12:13:21.064654  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.064666  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:21.064674  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:21.064743  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:21.100146  152982 cri.go:89] found id: ""
	I0826 12:13:21.100182  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.100190  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:21.100197  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:21.100268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:21.149001  152982 cri.go:89] found id: ""
	I0826 12:13:21.149031  152982 logs.go:276] 0 containers: []
	W0826 12:13:21.149040  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:21.149054  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:21.149074  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:21.229783  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:21.229809  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:21.229826  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:21.305579  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:21.305619  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:21.343856  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:21.343884  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:21.394183  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:21.394231  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:18.205132  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:20.704261  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:21.754167  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.253321  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:22.123356  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:24.621337  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:23.908368  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:23.922748  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:23.922840  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:23.964168  152982 cri.go:89] found id: ""
	I0826 12:13:23.964199  152982 logs.go:276] 0 containers: []
	W0826 12:13:23.964209  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:23.964218  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:23.964290  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:24.001156  152982 cri.go:89] found id: ""
	I0826 12:13:24.001186  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.001199  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:24.001204  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:24.001268  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:24.040001  152982 cri.go:89] found id: ""
	I0826 12:13:24.040037  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.040057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:24.040067  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:24.040139  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:24.076901  152982 cri.go:89] found id: ""
	I0826 12:13:24.076940  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.076948  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:24.076955  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:24.077028  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:24.129347  152982 cri.go:89] found id: ""
	I0826 12:13:24.129375  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.129383  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:24.129389  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:24.129457  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:24.169634  152982 cri.go:89] found id: ""
	I0826 12:13:24.169666  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.169678  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:24.169685  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:24.169740  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:24.206976  152982 cri.go:89] found id: ""
	I0826 12:13:24.207006  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.207015  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:24.207023  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:24.207092  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:24.243755  152982 cri.go:89] found id: ""
	I0826 12:13:24.243790  152982 logs.go:276] 0 containers: []
	W0826 12:13:24.243800  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:24.243812  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:24.243829  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:24.323085  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:24.323131  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:24.362404  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:24.362436  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:24.411863  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:24.411910  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:24.425742  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:24.425776  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:24.492510  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:23.203855  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:25.704793  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.753722  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:28.753791  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.622857  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:29.122053  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:26.993510  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:27.007233  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:27.007304  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:27.041360  152982 cri.go:89] found id: ""
	I0826 12:13:27.041392  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.041401  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:27.041407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:27.041470  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:27.076040  152982 cri.go:89] found id: ""
	I0826 12:13:27.076069  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.076080  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:27.076088  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:27.076160  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:27.114288  152982 cri.go:89] found id: ""
	I0826 12:13:27.114325  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.114336  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:27.114345  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:27.114418  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:27.148538  152982 cri.go:89] found id: ""
	I0826 12:13:27.148572  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.148582  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:27.148588  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:27.148665  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:27.182331  152982 cri.go:89] found id: ""
	I0826 12:13:27.182362  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.182373  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:27.182382  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:27.182453  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:27.218645  152982 cri.go:89] found id: ""
	I0826 12:13:27.218698  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.218710  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:27.218720  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:27.218798  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:27.254987  152982 cri.go:89] found id: ""
	I0826 12:13:27.255021  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.255031  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:27.255037  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:27.255097  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:27.289633  152982 cri.go:89] found id: ""
	I0826 12:13:27.289662  152982 logs.go:276] 0 containers: []
	W0826 12:13:27.289672  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:27.289683  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:27.289705  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:27.338387  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:27.338429  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:27.353764  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:27.353799  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:27.425833  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:27.425855  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:27.425870  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:27.507035  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:27.507078  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.047763  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:30.063283  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:30.063382  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:30.100161  152982 cri.go:89] found id: ""
	I0826 12:13:30.100194  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.100207  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:30.100215  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:30.100276  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:30.136507  152982 cri.go:89] found id: ""
	I0826 12:13:30.136542  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.136554  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:30.136561  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:30.136632  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:30.170023  152982 cri.go:89] found id: ""
	I0826 12:13:30.170058  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.170066  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:30.170071  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:30.170128  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:30.204979  152982 cri.go:89] found id: ""
	I0826 12:13:30.205022  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.205032  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:30.205062  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:30.205135  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:30.242407  152982 cri.go:89] found id: ""
	I0826 12:13:30.242442  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.242455  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:30.242463  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:30.242532  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:30.280569  152982 cri.go:89] found id: ""
	I0826 12:13:30.280607  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.280619  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:30.280627  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:30.280684  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:30.317846  152982 cri.go:89] found id: ""
	I0826 12:13:30.317882  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.317892  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:30.317906  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:30.318011  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:30.354637  152982 cri.go:89] found id: ""
	I0826 12:13:30.354675  152982 logs.go:276] 0 containers: []
	W0826 12:13:30.354686  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:30.354698  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:30.354715  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:30.434983  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:30.435032  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:30.474170  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:30.474214  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:30.541092  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:30.541133  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:30.566671  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:30.566707  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:30.659622  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:28.203031  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.204134  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:32.703767  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:30.754563  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.253557  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:31.122121  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.125357  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.622870  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:33.160831  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:33.174476  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:33.174556  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:33.213402  152982 cri.go:89] found id: ""
	I0826 12:13:33.213433  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.213441  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:33.213447  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:33.213505  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:33.251024  152982 cri.go:89] found id: ""
	I0826 12:13:33.251056  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.251064  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:33.251070  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:33.251134  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:33.288839  152982 cri.go:89] found id: ""
	I0826 12:13:33.288873  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.288882  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:33.288889  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:33.288961  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:33.324289  152982 cri.go:89] found id: ""
	I0826 12:13:33.324321  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.324329  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:33.324335  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:33.324404  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:33.358921  152982 cri.go:89] found id: ""
	I0826 12:13:33.358953  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.358961  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:33.358968  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:33.359025  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:33.394579  152982 cri.go:89] found id: ""
	I0826 12:13:33.394615  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.394623  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:33.394629  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:33.394700  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:33.429750  152982 cri.go:89] found id: ""
	I0826 12:13:33.429782  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.429794  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:33.429802  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:33.429863  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:33.465857  152982 cri.go:89] found id: ""
	I0826 12:13:33.465895  152982 logs.go:276] 0 containers: []
	W0826 12:13:33.465908  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:33.465921  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:33.465939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:33.506312  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:33.506344  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:33.557235  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:33.557279  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:33.570259  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:33.570293  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:33.638927  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:33.638952  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:33.638973  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:36.217153  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:36.230544  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:36.230630  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:36.283359  152982 cri.go:89] found id: ""
	I0826 12:13:36.283394  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.283405  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:36.283413  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:36.283486  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:36.327991  152982 cri.go:89] found id: ""
	I0826 12:13:36.328017  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.328026  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:36.328031  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:36.328095  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:36.380106  152982 cri.go:89] found id: ""
	I0826 12:13:36.380137  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.380147  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:36.380154  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:36.380212  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:36.415844  152982 cri.go:89] found id: ""
	I0826 12:13:36.415872  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.415880  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:36.415886  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:36.415939  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:36.451058  152982 cri.go:89] found id: ""
	I0826 12:13:36.451131  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.451158  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:36.451168  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:36.451235  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:36.485814  152982 cri.go:89] found id: ""
	I0826 12:13:36.485845  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.485856  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:36.485864  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:36.485943  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:36.520811  152982 cri.go:89] found id: ""
	I0826 12:13:36.520848  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.520865  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:36.520876  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:36.520952  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:36.557835  152982 cri.go:89] found id: ""
	I0826 12:13:36.557866  152982 logs.go:276] 0 containers: []
	W0826 12:13:36.557877  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:36.557897  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:36.557915  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:36.609551  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:36.609594  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:36.624424  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:36.624453  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:36.697267  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:36.697294  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:36.697312  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:34.704284  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.203717  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:35.752752  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:38.253700  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:37.622907  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.121820  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:36.781810  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:36.781862  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.326306  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:39.340161  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:39.340229  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:39.373614  152982 cri.go:89] found id: ""
	I0826 12:13:39.373646  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.373655  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:39.373664  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:39.373732  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:39.408021  152982 cri.go:89] found id: ""
	I0826 12:13:39.408059  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.408067  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:39.408073  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:39.408127  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:39.450503  152982 cri.go:89] found id: ""
	I0826 12:13:39.450531  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.450541  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:39.450549  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:39.450624  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:39.487553  152982 cri.go:89] found id: ""
	I0826 12:13:39.487585  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.487596  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:39.487625  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:39.487695  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:39.524701  152982 cri.go:89] found id: ""
	I0826 12:13:39.524734  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.524745  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:39.524753  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:39.524822  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:39.557863  152982 cri.go:89] found id: ""
	I0826 12:13:39.557893  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.557903  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:39.557911  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:39.557979  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:39.593456  152982 cri.go:89] found id: ""
	I0826 12:13:39.593486  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.593496  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:39.593504  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:39.593577  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:39.628444  152982 cri.go:89] found id: ""
	I0826 12:13:39.628472  152982 logs.go:276] 0 containers: []
	W0826 12:13:39.628481  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:39.628490  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:39.628503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:39.668929  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:39.668967  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:39.724948  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:39.725003  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:39.740014  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:39.740060  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:39.814786  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:39.814811  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:39.814828  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:39.704050  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:41.704769  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:40.752827  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.753423  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.122285  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.622043  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:42.393781  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:42.407529  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:42.407620  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:42.444273  152982 cri.go:89] found id: ""
	I0826 12:13:42.444305  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.444314  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:42.444321  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:42.444389  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:42.478683  152982 cri.go:89] found id: ""
	I0826 12:13:42.478724  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.478734  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:42.478741  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:42.478803  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:42.520650  152982 cri.go:89] found id: ""
	I0826 12:13:42.520684  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.520708  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:42.520715  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:42.520774  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:42.558610  152982 cri.go:89] found id: ""
	I0826 12:13:42.558656  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.558667  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:42.558677  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:42.558750  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:42.593960  152982 cri.go:89] found id: ""
	I0826 12:13:42.593991  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.593999  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:42.594006  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:42.594064  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:42.628257  152982 cri.go:89] found id: ""
	I0826 12:13:42.628284  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.628294  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:42.628300  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:42.628372  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:42.669894  152982 cri.go:89] found id: ""
	I0826 12:13:42.669933  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.669946  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:42.669956  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:42.670029  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:42.707893  152982 cri.go:89] found id: ""
	I0826 12:13:42.707923  152982 logs.go:276] 0 containers: []
	W0826 12:13:42.707934  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:42.707946  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:42.707962  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:42.760778  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:42.760823  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:42.773718  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:42.773753  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:42.855780  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:42.855813  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:42.855831  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:42.934872  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:42.934925  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:45.473505  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:45.488485  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:45.488582  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:45.524355  152982 cri.go:89] found id: ""
	I0826 12:13:45.524387  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.524398  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:45.524407  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:45.524474  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:45.563731  152982 cri.go:89] found id: ""
	I0826 12:13:45.563758  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.563767  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:45.563772  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:45.563832  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:45.595876  152982 cri.go:89] found id: ""
	I0826 12:13:45.595910  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.595918  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:45.595924  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:45.595977  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:45.629212  152982 cri.go:89] found id: ""
	I0826 12:13:45.629246  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.629256  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:45.629262  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:45.629316  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:45.662718  152982 cri.go:89] found id: ""
	I0826 12:13:45.662748  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.662759  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:45.662766  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:45.662851  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:45.697540  152982 cri.go:89] found id: ""
	I0826 12:13:45.697573  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.697585  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:45.697598  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:45.697670  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:45.738012  152982 cri.go:89] found id: ""
	I0826 12:13:45.738054  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.738067  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:45.738077  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:45.738174  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:45.778322  152982 cri.go:89] found id: ""
	I0826 12:13:45.778352  152982 logs.go:276] 0 containers: []
	W0826 12:13:45.778364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:45.778376  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:45.778395  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:45.830530  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:45.830570  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:45.845289  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:45.845335  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:45.918490  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:45.918514  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:45.918528  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:45.998762  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:45.998806  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:44.204527  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.204789  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:44.753605  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.754396  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.255176  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:46.622584  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:49.122691  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:48.540076  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:48.554537  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:48.554616  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:48.589750  152982 cri.go:89] found id: ""
	I0826 12:13:48.589783  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.589792  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:48.589799  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:48.589866  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.628496  152982 cri.go:89] found id: ""
	I0826 12:13:48.628530  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.628540  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:48.628557  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:48.628635  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:48.670630  152982 cri.go:89] found id: ""
	I0826 12:13:48.670667  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.670678  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:48.670686  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:48.670756  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:48.707510  152982 cri.go:89] found id: ""
	I0826 12:13:48.707543  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.707564  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:48.707572  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:48.707642  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:48.752189  152982 cri.go:89] found id: ""
	I0826 12:13:48.752222  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.752231  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:48.752237  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:48.752306  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:48.788294  152982 cri.go:89] found id: ""
	I0826 12:13:48.788332  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.788356  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:48.788364  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:48.788439  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:48.822728  152982 cri.go:89] found id: ""
	I0826 12:13:48.822755  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.822765  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:48.822771  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:48.822850  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:48.859237  152982 cri.go:89] found id: ""
	I0826 12:13:48.859270  152982 logs.go:276] 0 containers: []
	W0826 12:13:48.859280  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:48.859293  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:48.859310  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:48.944271  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:48.944322  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:48.983438  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:48.983477  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:49.036463  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:49.036511  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:49.051081  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:49.051123  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:49.127953  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:51.629023  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:51.643644  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:51.643728  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:51.684273  152982 cri.go:89] found id: ""
	I0826 12:13:51.684310  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.684323  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:51.684331  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:51.684401  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:48.703794  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:50.703872  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:52.705329  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.753669  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.252960  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.623221  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:54.121867  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:51.720561  152982 cri.go:89] found id: ""
	I0826 12:13:51.720600  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.720610  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:51.720616  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:51.720690  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:51.758023  152982 cri.go:89] found id: ""
	I0826 12:13:51.758049  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.758057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:51.758063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:51.758123  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:51.797029  152982 cri.go:89] found id: ""
	I0826 12:13:51.797063  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.797075  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:51.797082  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:51.797150  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:51.832002  152982 cri.go:89] found id: ""
	I0826 12:13:51.832032  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.832043  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:51.832051  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:51.832122  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:51.867042  152982 cri.go:89] found id: ""
	I0826 12:13:51.867074  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.867083  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:51.867090  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:51.867155  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:51.904887  152982 cri.go:89] found id: ""
	I0826 12:13:51.904919  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.904931  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:51.904938  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:51.905005  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:51.940628  152982 cri.go:89] found id: ""
	I0826 12:13:51.940662  152982 logs.go:276] 0 containers: []
	W0826 12:13:51.940674  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:51.940686  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:51.940703  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:51.979988  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:51.980021  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:52.033297  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:52.033338  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:52.047004  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:52.047039  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:52.126136  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:52.126163  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:52.126176  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:54.711457  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:54.726419  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:54.726510  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:54.773253  152982 cri.go:89] found id: ""
	I0826 12:13:54.773290  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.773304  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:54.773324  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:54.773397  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:54.812175  152982 cri.go:89] found id: ""
	I0826 12:13:54.812211  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.812232  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:54.812239  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:54.812298  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:54.848673  152982 cri.go:89] found id: ""
	I0826 12:13:54.848702  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.848710  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:54.848717  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:54.848782  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:54.884211  152982 cri.go:89] found id: ""
	I0826 12:13:54.884239  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.884252  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:54.884259  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:54.884329  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:54.925279  152982 cri.go:89] found id: ""
	I0826 12:13:54.925312  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.925323  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:54.925331  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:54.925406  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:54.961004  152982 cri.go:89] found id: ""
	I0826 12:13:54.961035  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.961043  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:54.961050  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:54.961114  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:54.998689  152982 cri.go:89] found id: ""
	I0826 12:13:54.998720  152982 logs.go:276] 0 containers: []
	W0826 12:13:54.998730  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:54.998737  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:54.998810  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:55.033540  152982 cri.go:89] found id: ""
	I0826 12:13:55.033671  152982 logs.go:276] 0 containers: []
	W0826 12:13:55.033683  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:55.033696  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:55.033713  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:55.082966  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:55.083006  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:55.096472  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:55.096503  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:55.166868  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:55.166899  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:55.166917  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:13:55.260596  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:55.260637  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:55.206106  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.704214  152550 pod_ready.go:103] pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.253114  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.254749  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:56.122385  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:58.124183  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:00.622721  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:13:57.804727  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:13:57.818098  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:13:57.818188  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:13:57.852777  152982 cri.go:89] found id: ""
	I0826 12:13:57.852819  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.852832  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:13:57.852841  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:13:57.852906  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:13:57.888778  152982 cri.go:89] found id: ""
	I0826 12:13:57.888815  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.888832  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:13:57.888840  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:13:57.888924  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:13:57.927398  152982 cri.go:89] found id: ""
	I0826 12:13:57.927432  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.927444  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:13:57.927452  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:13:57.927527  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:13:57.965373  152982 cri.go:89] found id: ""
	I0826 12:13:57.965402  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.965420  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:13:57.965425  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:13:57.965488  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:13:57.999334  152982 cri.go:89] found id: ""
	I0826 12:13:57.999366  152982 logs.go:276] 0 containers: []
	W0826 12:13:57.999374  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:13:57.999380  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:13:57.999441  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:13:58.035268  152982 cri.go:89] found id: ""
	I0826 12:13:58.035299  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.035308  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:13:58.035313  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:13:58.035373  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:13:58.070055  152982 cri.go:89] found id: ""
	I0826 12:13:58.070088  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.070099  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:13:58.070107  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:13:58.070176  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:13:58.104845  152982 cri.go:89] found id: ""
	I0826 12:13:58.104882  152982 logs.go:276] 0 containers: []
	W0826 12:13:58.104893  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:13:58.104906  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:13:58.104923  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:13:58.149392  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:13:58.149427  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:13:58.201310  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:13:58.201345  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:13:58.217027  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:13:58.217067  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:13:58.301347  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.301372  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:13:58.301389  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:00.881924  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:00.897716  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:14:00.897804  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:14:00.934959  152982 cri.go:89] found id: ""
	I0826 12:14:00.934993  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.935005  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:14:00.935013  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:14:00.935086  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:14:00.969225  152982 cri.go:89] found id: ""
	I0826 12:14:00.969257  152982 logs.go:276] 0 containers: []
	W0826 12:14:00.969266  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:14:00.969272  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:14:00.969344  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:14:01.004010  152982 cri.go:89] found id: ""
	I0826 12:14:01.004047  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.004057  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:14:01.004063  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:14:01.004136  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:14:01.039659  152982 cri.go:89] found id: ""
	I0826 12:14:01.039689  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.039697  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:14:01.039704  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:14:01.039758  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:14:01.073234  152982 cri.go:89] found id: ""
	I0826 12:14:01.073266  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.073278  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:14:01.073293  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:14:01.073370  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:14:01.111187  152982 cri.go:89] found id: ""
	I0826 12:14:01.111229  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.111243  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:14:01.111261  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:14:01.111331  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:14:01.145754  152982 cri.go:89] found id: ""
	I0826 12:14:01.145791  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.145803  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:14:01.145811  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:14:01.145885  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:14:01.182342  152982 cri.go:89] found id: ""
	I0826 12:14:01.182386  152982 logs.go:276] 0 containers: []
	W0826 12:14:01.182398  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0826 12:14:01.182412  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:14:01.182434  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:14:01.266710  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:14:01.266754  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0826 12:14:01.305346  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:14:01.305385  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:14:01.356704  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:14:01.356745  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:14:01.370117  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:14:01.370149  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:14:01.440661  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:13:58.198044  152550 pod_ready.go:82] duration metric: took 4m0.000989551s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" ...
	E0826 12:13:58.198094  152550 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cw5t8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:13:58.198117  152550 pod_ready.go:39] duration metric: took 4m12.634931094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:13:58.198155  152550 kubeadm.go:597] duration metric: took 4m20.008849713s to restartPrimaryControlPlane
	W0826 12:13:58.198303  152550 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:13:58.198455  152550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:00.756478  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.253496  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:03.941691  152982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:03.956386  152982 kubeadm.go:597] duration metric: took 4m3.440941217s to restartPrimaryControlPlane
	W0826 12:14:03.956466  152982 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:03.956493  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:04.426489  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:04.441881  152982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:04.452877  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:04.463304  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:04.463332  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:04.463380  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:04.473208  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:04.473290  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:04.483666  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:04.494051  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:04.494177  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:04.504320  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.514099  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:04.514174  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:04.524235  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:04.533899  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:04.533984  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:04.544851  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:04.618397  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:14:04.618498  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:04.760383  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:04.760547  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:04.760690  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:14:04.953284  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:02.622852  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:05.122408  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:04.955371  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:04.955481  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:04.955563  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:04.955664  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:04.955738  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:04.955850  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:04.955953  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:04.956047  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:04.956133  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:04.956239  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:04.956306  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:04.956366  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:04.956455  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:05.401019  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:05.543601  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:05.641242  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:05.716524  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:05.737543  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:05.739428  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:05.739530  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:05.887203  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:05.889144  152982 out.go:235]   - Booting up control plane ...
	I0826 12:14:05.889288  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:05.891248  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:05.892518  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:05.894610  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:05.899134  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:14:05.753455  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.754033  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:07.622166  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:09.623006  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:10.253568  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.255058  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:12.122796  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.622774  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:14.753807  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.253632  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.254808  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:17.123304  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:19.622567  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.257450  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.752912  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:21.623069  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:23.624561  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.253685  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.752880  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:26.122470  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:28.623195  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:29.414342  152550 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.215853526s)
	I0826 12:14:29.414450  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:29.436730  152550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:14:29.449421  152550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:14:29.462320  152550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:14:29.462349  152550 kubeadm.go:157] found existing configuration files:
	
	I0826 12:14:29.462411  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:14:29.473119  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:14:29.473189  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:14:29.493795  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:14:29.516473  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:14:29.516563  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:14:29.528887  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.537934  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:14:29.538011  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:14:29.548384  152550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:14:29.557588  152550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:14:29.557659  152550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:14:29.567544  152550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:14:29.611274  152550 kubeadm.go:310] W0826 12:14:29.589660    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.612346  152550 kubeadm.go:310] W0826 12:14:29.590990    2810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:14:29.731352  152550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:14:30.755803  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.252679  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:31.123036  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:33.623654  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:35.623993  153366 pod_ready.go:103] pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:38.120098  152550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:14:38.120187  152550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:14:38.120283  152550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:14:38.120428  152550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:14:38.120548  152550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:14:38.120643  152550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:14:38.122417  152550 out.go:235]   - Generating certificates and keys ...
	I0826 12:14:38.122519  152550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:14:38.122590  152550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:14:38.122681  152550 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:14:38.122766  152550 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:14:38.122884  152550 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:14:38.122960  152550 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:14:38.123047  152550 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:14:38.123146  152550 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:14:38.123242  152550 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:14:38.123316  152550 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:14:38.123350  152550 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:14:38.123394  152550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:14:38.123481  152550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:14:38.123531  152550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:14:38.123602  152550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:14:38.123656  152550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:14:38.123702  152550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:14:38.123770  152550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:14:38.123830  152550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:14:38.126005  152550 out.go:235]   - Booting up control plane ...
	I0826 12:14:38.126111  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:14:38.126209  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:14:38.126293  152550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:14:38.126433  152550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:14:38.126541  152550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:14:38.126619  152550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:14:38.126796  152550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:14:38.126975  152550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:14:38.127064  152550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001663066s
	I0826 12:14:38.127156  152550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:14:38.127239  152550 kubeadm.go:310] [api-check] The API server is healthy after 4.502197821s
	I0826 12:14:38.127376  152550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:14:38.127527  152550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:14:38.127622  152550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:14:38.127799  152550 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-923586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:14:38.127882  152550 kubeadm.go:310] [bootstrap-token] Using token: uk5nes.r9l047sx2ciq7ja8
	I0826 12:14:38.129135  152550 out.go:235]   - Configuring RBAC rules ...
	I0826 12:14:38.129255  152550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:14:38.129363  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:14:38.129493  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:14:38.129668  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:14:38.129810  152550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:14:38.129908  152550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:14:38.130016  152550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:14:38.130071  152550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:14:38.130114  152550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:14:38.130120  152550 kubeadm.go:310] 
	I0826 12:14:38.130173  152550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:14:38.130178  152550 kubeadm.go:310] 
	I0826 12:14:38.130239  152550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:14:38.130249  152550 kubeadm.go:310] 
	I0826 12:14:38.130269  152550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:14:38.130340  152550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:14:38.130414  152550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:14:38.130424  152550 kubeadm.go:310] 
	I0826 12:14:38.130501  152550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:14:38.130515  152550 kubeadm.go:310] 
	I0826 12:14:38.130583  152550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:14:38.130595  152550 kubeadm.go:310] 
	I0826 12:14:38.130676  152550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:14:38.130774  152550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:14:38.130889  152550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:14:38.130898  152550 kubeadm.go:310] 
	I0826 12:14:38.130984  152550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:14:38.131067  152550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:14:38.131086  152550 kubeadm.go:310] 
	I0826 12:14:38.131158  152550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131276  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:14:38.131297  152550 kubeadm.go:310] 	--control-plane 
	I0826 12:14:38.131301  152550 kubeadm.go:310] 
	I0826 12:14:38.131407  152550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:14:38.131419  152550 kubeadm.go:310] 
	I0826 12:14:38.131518  152550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uk5nes.r9l047sx2ciq7ja8 \
	I0826 12:14:38.131634  152550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:14:38.131651  152550 cni.go:84] Creating CNI manager for ""
	I0826 12:14:38.131664  152550 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:14:38.133846  152550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:14:35.752863  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.752967  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:37.116222  153366 pod_ready.go:82] duration metric: took 4m0.000438014s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" ...
	E0826 12:14:37.116261  153366 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-spxx8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:14:37.116289  153366 pod_ready.go:39] duration metric: took 4m10.542468189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:37.116344  153366 kubeadm.go:597] duration metric: took 4m19.458712933s to restartPrimaryControlPlane
	W0826 12:14:37.116458  153366 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:14:37.116493  153366 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:14:38.135291  152550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:14:38.146512  152550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:14:38.165564  152550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:14:38.165694  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.165744  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-923586 minikube.k8s.io/updated_at=2024_08_26T12_14_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=embed-certs-923586 minikube.k8s.io/primary=true
	I0826 12:14:38.409452  152550 ops.go:34] apiserver oom_adj: -16
	I0826 12:14:38.409559  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:38.910300  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.410434  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:39.909691  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.410601  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:40.910375  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.410502  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:41.909663  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.409954  152550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:14:42.515793  152550 kubeadm.go:1113] duration metric: took 4.350161994s to wait for elevateKubeSystemPrivileges
	I0826 12:14:42.515834  152550 kubeadm.go:394] duration metric: took 5m4.371327443s to StartCluster
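	The repeated "get sa default" probes above simply wait for the default service account to exist so the minikube-rbac clusterrolebinding and node labels can take effect. A hedged follow-up check once the kubeconfig has been written (assumed commands, not part of the log):

	    # Confirm the RBAC binding and node labels applied during bootstrap
	    kubectl --context embed-certs-923586 get clusterrolebinding minikube-rbac
	    kubectl --context embed-certs-923586 get node embed-certs-923586 --show-labels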
	I0826 12:14:42.515878  152550 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.515970  152550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:14:42.517781  152550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:14:42.518064  152550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:14:42.518189  152550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:14:42.518281  152550 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-923586"
	I0826 12:14:42.518296  152550 addons.go:69] Setting default-storageclass=true in profile "embed-certs-923586"
	I0826 12:14:42.518309  152550 config.go:182] Loaded profile config "embed-certs-923586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:14:42.518339  152550 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-923586"
	W0826 12:14:42.518352  152550 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:14:42.518362  152550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-923586"
	I0826 12:14:42.518383  152550 addons.go:69] Setting metrics-server=true in profile "embed-certs-923586"
	I0826 12:14:42.518405  152550 addons.go:234] Setting addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:42.518409  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	W0826 12:14:42.518418  152550 addons.go:243] addon metrics-server should already be in state true
	I0826 12:14:42.518446  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.518852  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518865  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518829  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518890  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.518905  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.519968  152550 out.go:177] * Verifying Kubernetes components...
	I0826 12:14:42.521761  152550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:14:42.537559  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0826 12:14:42.538127  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.538827  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.538891  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.539336  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.539636  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.540538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0826 12:14:42.540644  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0826 12:14:42.541179  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541244  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.541681  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541695  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.541834  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.541842  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.542936  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.542979  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.543441  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543490  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543551  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.543577  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.543637  152550 addons.go:234] Setting addon default-storageclass=true in "embed-certs-923586"
	W0826 12:14:42.543663  152550 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:14:42.543700  152550 host.go:66] Checking if "embed-certs-923586" exists ...
	I0826 12:14:42.544040  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.544067  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.561871  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0826 12:14:42.562432  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.562957  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.562971  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.563394  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.563689  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.565675  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.565857  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0826 12:14:42.565980  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0826 12:14:42.566268  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566352  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.566799  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.566815  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567209  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567364  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.567386  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.567775  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.567779  152550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:14:42.567855  152550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:14:42.567903  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.568183  152550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:14:42.569717  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.569832  152550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.569854  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:14:42.569876  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.571655  152550 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:14:42.572951  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.572975  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:14:42.572988  152550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:14:42.573009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.573393  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.573434  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.573818  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.574020  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.574160  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.574454  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.576356  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.576762  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.576782  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.577099  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.577293  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.577430  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.577564  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.586538  152550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0826 12:14:42.587087  152550 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:14:42.587574  152550 main.go:141] libmachine: Using API Version  1
	I0826 12:14:42.587590  152550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:14:42.587849  152550 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:14:42.588001  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetState
	I0826 12:14:42.589835  152550 main.go:141] libmachine: (embed-certs-923586) Calling .DriverName
	I0826 12:14:42.590061  152550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.590075  152550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:14:42.590089  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHHostname
	I0826 12:14:42.592573  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.592861  152550 main.go:141] libmachine: (embed-certs-923586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:e9:ed", ip: ""} in network mk-embed-certs-923586: {Iface:virbr1 ExpiryTime:2024-08-26 13:09:22 +0000 UTC Type:0 Mac:52:54:00:2e:e9:ed Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:embed-certs-923586 Clientid:01:52:54:00:2e:e9:ed}
	I0826 12:14:42.592952  152550 main.go:141] libmachine: (embed-certs-923586) DBG | domain embed-certs-923586 has defined IP address 192.168.39.6 and MAC address 52:54:00:2e:e9:ed in network mk-embed-certs-923586
	I0826 12:14:42.593269  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHPort
	I0826 12:14:42.593437  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHKeyPath
	I0826 12:14:42.593541  152550 main.go:141] libmachine: (embed-certs-923586) Calling .GetSSHUsername
	I0826 12:14:42.593637  152550 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/embed-certs-923586/id_rsa Username:docker}
	I0826 12:14:42.772651  152550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:14:42.795921  152550 node_ready.go:35] waiting up to 6m0s for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831853  152550 node_ready.go:49] node "embed-certs-923586" has status "Ready":"True"
	I0826 12:14:42.831881  152550 node_ready.go:38] duration metric: took 35.920093ms for node "embed-certs-923586" to be "Ready" ...
	I0826 12:14:42.831893  152550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:42.856949  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:42.924562  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:14:42.940640  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:14:42.940669  152550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:14:42.958680  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:14:42.975446  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:14:42.975481  152550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:14:43.037862  152550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:43.037891  152550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:14:43.105738  152550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:14:44.054921  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130312138s)
	I0826 12:14:44.054995  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055009  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055025  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096305238s)
	I0826 12:14:44.055070  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055087  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055330  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055394  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055408  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055416  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055423  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055444  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055395  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055498  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055512  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.055520  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.055719  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055724  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.055734  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055858  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.055898  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.055923  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.075068  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.075100  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.075404  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.075424  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478321  152550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.372540463s)
	I0826 12:14:44.478382  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478402  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.478806  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.478864  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.478876  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.478891  152550 main.go:141] libmachine: Making call to close driver server
	I0826 12:14:44.478904  152550 main.go:141] libmachine: (embed-certs-923586) Calling .Close
	I0826 12:14:44.479161  152550 main.go:141] libmachine: (embed-certs-923586) DBG | Closing plugin on server side
	I0826 12:14:44.479161  152550 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:14:44.479189  152550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:14:44.479205  152550 addons.go:475] Verifying addon metrics-server=true in "embed-certs-923586"
	I0826 12:14:44.482190  152550 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
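	The addon step above applies the four metrics-server manifests plus storage-provisioner and storageclass. A hedged way to spot-check the result from the host (commands assumed, not from the log; note that in this run the metrics-server pod stays Pending because it points at a fake.domain image, so kubectl top returns no data):

	    # Verify the metrics-server deployment and its APIService registration
	    kubectl --context embed-certs-923586 -n kube-system get deployment metrics-server
	    kubectl --context embed-certs-923586 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-923586 top nodes   # only works once metrics-server is Ready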
	I0826 12:14:40.254480  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:42.753499  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:45.900198  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:14:45.901204  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:45.901550  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
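	The 152982 run is stuck on kubeadm's kubelet health probe. A minimal triage sketch for a refused localhost:10248/healthz, assuming shell access to that node (commands not taken from the log):

	    # Check whether the kubelet is running and why it may have exited
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --no-pager -n 50
	    # Re-run the same probe kubeadm uses
	    curl -sSL http://localhost:10248/healthz && echo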
	I0826 12:14:44.483577  152550 addons.go:510] duration metric: took 1.965385921s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:14:44.876221  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:44.876253  152550 pod_ready.go:82] duration metric: took 2.019275302s for pod "coredns-6f6b679f8f-5tpbm" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.876270  152550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883514  152550 pod_ready.go:93] pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.883542  152550 pod_ready.go:82] duration metric: took 1.007263784s for pod "coredns-6f6b679f8f-dhm6d" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.883553  152550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890724  152550 pod_ready.go:93] pod "etcd-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:45.890750  152550 pod_ready.go:82] duration metric: took 7.190212ms for pod "etcd-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:45.890760  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:44.754815  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.252702  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:49.254411  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:47.897138  152550 pod_ready.go:103] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:48.897502  152550 pod_ready.go:93] pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:48.897529  152550 pod_ready.go:82] duration metric: took 3.006762275s for pod "kube-apiserver-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:48.897541  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905832  152550 pod_ready.go:93] pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.905858  152550 pod_ready.go:82] duration metric: took 2.008310051s for pod "kube-controller-manager-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.905870  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912639  152550 pod_ready.go:93] pod "kube-proxy-xnv2b" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.912679  152550 pod_ready.go:82] duration metric: took 6.793285ms for pod "kube-proxy-xnv2b" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.912694  152550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918794  152550 pod_ready.go:93] pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace has status "Ready":"True"
	I0826 12:14:50.918819  152550 pod_ready.go:82] duration metric: took 6.117525ms for pod "kube-scheduler-embed-certs-923586" in "kube-system" namespace to be "Ready" ...
	I0826 12:14:50.918826  152550 pod_ready.go:39] duration metric: took 8.086922463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:14:50.918867  152550 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:14:50.918928  152550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:14:50.936095  152550 api_server.go:72] duration metric: took 8.41799252s to wait for apiserver process to appear ...
	I0826 12:14:50.936126  152550 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:14:50.936155  152550 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0826 12:14:50.941142  152550 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0826 12:14:50.942612  152550 api_server.go:141] control plane version: v1.31.0
	I0826 12:14:50.942653  152550 api_server.go:131] duration metric: took 6.519342ms to wait for apiserver health ...
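	The health gate above is a plain GET on /healthz followed by reading the server version. An equivalent manual check, hedged (by default the apiserver serves /healthz to unauthenticated clients, so -k alone is enough here):

	    # Same probes, run by hand against the embed-certs control plane
	    curl -k https://192.168.39.6:8443/healthz ; echo
	    kubectl --context embed-certs-923586 version --output=yaml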
	I0826 12:14:50.942664  152550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:14:50.947646  152550 system_pods.go:59] 9 kube-system pods found
	I0826 12:14:50.947675  152550 system_pods.go:61] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:50.947680  152550 system_pods.go:61] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:50.947684  152550 system_pods.go:61] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:50.947688  152550 system_pods.go:61] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:50.947691  152550 system_pods.go:61] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:50.947694  152550 system_pods.go:61] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:50.947699  152550 system_pods.go:61] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:50.947705  152550 system_pods.go:61] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:50.947709  152550 system_pods.go:61] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:50.947717  152550 system_pods.go:74] duration metric: took 5.046771ms to wait for pod list to return data ...
	I0826 12:14:50.947723  152550 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:14:50.950716  152550 default_sa.go:45] found service account: "default"
	I0826 12:14:50.950744  152550 default_sa.go:55] duration metric: took 3.014513ms for default service account to be created ...
	I0826 12:14:50.950756  152550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:14:51.063812  152550 system_pods.go:86] 9 kube-system pods found
	I0826 12:14:51.063849  152550 system_pods.go:89] "coredns-6f6b679f8f-5tpbm" [3cc20f31-6d6c-4104-93c3-29c1b94de93c] Running
	I0826 12:14:51.063858  152550 system_pods.go:89] "coredns-6f6b679f8f-dhm6d" [a6a9c3c6-91e8-4232-8cd6-16233be0350f] Running
	I0826 12:14:51.063864  152550 system_pods.go:89] "etcd-embed-certs-923586" [3ffae2e2-716f-417c-a998-cdbb2bdb47ab] Running
	I0826 12:14:51.063869  152550 system_pods.go:89] "kube-apiserver-embed-certs-923586" [e06adc6b-d78c-4226-a9cc-491c8a642f5c] Running
	I0826 12:14:51.063875  152550 system_pods.go:89] "kube-controller-manager-embed-certs-923586" [82fad257-8bbb-4b67-b90d-e65bac3e0662] Running
	I0826 12:14:51.063880  152550 system_pods.go:89] "kube-proxy-xnv2b" [b380ae46-11a4-44f2-99b1-428fa493fe99] Running
	I0826 12:14:51.063886  152550 system_pods.go:89] "kube-scheduler-embed-certs-923586" [8906d6f9-4227-4e04-9e95-90049862e613] Running
	I0826 12:14:51.063894  152550 system_pods.go:89] "metrics-server-6867b74b74-k6mkf" [45ba4fff-060e-4b04-b86c-8e25918b739e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:14:51.063901  152550 system_pods.go:89] "storage-provisioner" [3acbf90c-c596-49df-8b5c-2a43f90d2008] Running
	I0826 12:14:51.063914  152550 system_pods.go:126] duration metric: took 113.151196ms to wait for k8s-apps to be running ...
	I0826 12:14:51.063925  152550 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:14:51.063978  152550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:14:51.079783  152550 system_svc.go:56] duration metric: took 15.845401ms WaitForService to wait for kubelet
	I0826 12:14:51.079821  152550 kubeadm.go:582] duration metric: took 8.56172531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:14:51.079848  152550 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:14:51.262166  152550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:14:51.262194  152550 node_conditions.go:123] node cpu capacity is 2
	I0826 12:14:51.262233  152550 node_conditions.go:105] duration metric: took 182.377973ms to run NodePressure ...
	I0826 12:14:51.262248  152550 start.go:241] waiting for startup goroutines ...
	I0826 12:14:51.262258  152550 start.go:246] waiting for cluster config update ...
	I0826 12:14:51.262272  152550 start.go:255] writing updated cluster config ...
	I0826 12:14:51.262587  152550 ssh_runner.go:195] Run: rm -f paused
	I0826 12:14:51.317881  152550 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:14:51.319950  152550 out.go:177] * Done! kubectl is now configured to use "embed-certs-923586" cluster and "default" namespace by default
	I0826 12:14:50.901903  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:14:50.902179  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:14:51.256756  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:53.755801  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:56.253848  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:14:58.254315  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:00.902494  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:00.902754  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:03.257214  153366 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140694693s)
	I0826 12:15:03.257298  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:03.273530  153366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:03.284370  153366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:03.294199  153366 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:03.294221  153366 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:03.294270  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0826 12:15:03.303856  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:03.303938  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:03.313935  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0826 12:15:03.323395  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:03.323477  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:03.333728  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.343369  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:03.343452  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:03.353456  153366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0826 12:15:03.363384  153366 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:03.363472  153366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
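	The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444. A compact shell equivalent of what the log shows (an illustrative reconstruction, not minikube's actual code):

	    # Remove kubeconfigs that do not reference the expected API endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done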
	I0826 12:15:03.373738  153366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:03.422068  153366 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:03.422173  153366 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:03.535516  153366 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:03.535649  153366 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:03.535775  153366 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:03.550873  153366 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:03.552861  153366 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:03.552969  153366 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:03.553038  153366 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:03.553138  153366 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:03.553218  153366 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:03.553319  153366 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:03.553385  153366 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:03.553462  153366 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:03.553536  153366 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:03.553674  153366 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:03.553810  153366 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:03.553854  153366 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:03.553906  153366 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:03.650986  153366 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:03.737989  153366 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:03.981919  153366 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:04.322809  153366 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:04.378495  153366 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:04.379108  153366 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:04.382061  153366 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:00.753091  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:02.753181  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:04.384093  153366 out.go:235]   - Booting up control plane ...
	I0826 12:15:04.384215  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:04.384313  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:04.384401  153366 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:04.405533  153366 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:04.411925  153366 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:04.411998  153366 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:04.548438  153366 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:04.548626  153366 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:05.049451  153366 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.31099ms
	I0826 12:15:05.049526  153366 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:05.253970  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:07.753555  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.051568  153366 kubeadm.go:310] [api-check] The API server is healthy after 5.001973036s
	I0826 12:15:10.066691  153366 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:10.086381  153366 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:10.122144  153366 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:10.122349  153366 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-697869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:10.138374  153366 kubeadm.go:310] [bootstrap-token] Using token: amrfa7.mjk6u0x9vle6unng
	I0826 12:15:10.139885  153366 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:10.140032  153366 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:10.156541  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:10.167826  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:10.174587  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:10.179100  153366 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:10.191798  153366 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:10.465168  153366 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:10.905160  153366 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:11.461111  153366 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:11.461144  153366 kubeadm.go:310] 
	I0826 12:15:11.461234  153366 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:11.461246  153366 kubeadm.go:310] 
	I0826 12:15:11.461381  153366 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:11.461404  153366 kubeadm.go:310] 
	I0826 12:15:11.461439  153366 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:11.461530  153366 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:11.461655  153366 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:11.461667  153366 kubeadm.go:310] 
	I0826 12:15:11.461761  153366 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:11.461776  153366 kubeadm.go:310] 
	I0826 12:15:11.461841  153366 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:11.461855  153366 kubeadm.go:310] 
	I0826 12:15:11.461951  153366 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:11.462070  153366 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:11.462171  153366 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:11.462181  153366 kubeadm.go:310] 
	I0826 12:15:11.462305  153366 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:11.462432  153366 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:11.462443  153366 kubeadm.go:310] 
	I0826 12:15:11.462557  153366 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.462694  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:11.462729  153366 kubeadm.go:310] 	--control-plane 
	I0826 12:15:11.462742  153366 kubeadm.go:310] 
	I0826 12:15:11.462862  153366 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:11.462879  153366 kubeadm.go:310] 
	I0826 12:15:11.463004  153366 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token amrfa7.mjk6u0x9vle6unng \
	I0826 12:15:11.463151  153366 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:11.463695  153366 kubeadm.go:310] W0826 12:15:03.397375    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464127  153366 kubeadm.go:310] W0826 12:15:03.398283    2528 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:11.464277  153366 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:11.464314  153366 cni.go:84] Creating CNI manager for ""
	I0826 12:15:11.464324  153366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:11.467369  153366 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:09.754135  152463 pod_ready.go:103] pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:10.247470  152463 pod_ready.go:82] duration metric: took 4m0.000930829s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" ...
	E0826 12:15:10.247510  152463 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-ldgsl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0826 12:15:10.247531  152463 pod_ready.go:39] duration metric: took 4m13.959337221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:10.247571  152463 kubeadm.go:597] duration metric: took 4m20.649627423s to restartPrimaryControlPlane
	W0826 12:15:10.247641  152463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0826 12:15:10.247671  152463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:15:11.468809  153366 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:11.480030  153366 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:11.503412  153366 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:11.503518  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:11.503558  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-697869 minikube.k8s.io/updated_at=2024_08_26T12_15_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=default-k8s-diff-port-697869 minikube.k8s.io/primary=true
	I0826 12:15:11.724406  153366 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:11.724524  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.225088  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:12.725598  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.225161  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:13.724619  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.225467  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:14.724756  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.224733  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.724555  153366 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:15.869377  153366 kubeadm.go:1113] duration metric: took 4.365927713s to wait for elevateKubeSystemPrivileges
	I0826 12:15:15.869426  153366 kubeadm.go:394] duration metric: took 4m58.261516694s to StartCluster
	I0826 12:15:15.869450  153366 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.869547  153366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:15.872248  153366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:15.872615  153366 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:15.872724  153366 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:15.872819  153366 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872837  153366 config.go:182] Loaded profile config "default-k8s-diff-port-697869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:15.872839  153366 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872858  153366 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872872  153366 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:15.872887  153366 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697869"
	I0826 12:15:15.872908  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872919  153366 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.872927  153366 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:15.872959  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.872890  153366 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697869"
	I0826 12:15:15.873361  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873403  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873418  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.873366  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.873465  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.874128  153366 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:15.875341  153366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:15.894326  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0826 12:15:15.894578  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0826 12:15:15.895050  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895104  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0826 12:15:15.895131  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.895609  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895629  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895612  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.895658  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.895696  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.896010  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896059  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896145  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.896164  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.896261  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.896493  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.896650  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.896675  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.896977  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.897022  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.899881  153366 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697869"
	W0826 12:15:15.899904  153366 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:15.899935  153366 host.go:66] Checking if "default-k8s-diff-port-697869" exists ...
	I0826 12:15:15.900218  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.900255  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.914959  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0826 12:15:15.915525  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.915993  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.916017  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.916418  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.916451  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0826 12:15:15.916588  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.916681  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0826 12:15:15.916999  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.917629  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.917643  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.918129  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.918298  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.918337  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.919305  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.919920  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.919947  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.920096  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.920226  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.920281  153366 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:15.920702  153366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:15.920724  153366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:15.921464  153366 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:15.921468  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:15.921554  153366 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:15.921575  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.923028  153366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:15.923051  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:15.923072  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.926224  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926364  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926865  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926877  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.926895  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.926900  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.927101  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927141  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.927313  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927329  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.927509  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927606  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.927677  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.927774  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:15.945639  153366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0826 12:15:15.946164  153366 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:15.946704  153366 main.go:141] libmachine: Using API Version  1
	I0826 12:15:15.946726  153366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:15.947148  153366 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:15.947420  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetState
	I0826 12:15:15.949257  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .DriverName
	I0826 12:15:15.949524  153366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:15.949544  153366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:15.949573  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHHostname
	I0826 12:15:15.952861  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953407  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:9b:a7", ip: ""} in network mk-default-k8s-diff-port-697869: {Iface:virbr3 ExpiryTime:2024-08-26 13:10:03 +0000 UTC Type:0 Mac:52:54:00:87:9b:a7 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-697869 Clientid:01:52:54:00:87:9b:a7}
	I0826 12:15:15.953440  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | domain default-k8s-diff-port-697869 has defined IP address 192.168.61.11 and MAC address 52:54:00:87:9b:a7 in network mk-default-k8s-diff-port-697869
	I0826 12:15:15.953604  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHPort
	I0826 12:15:15.953816  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHKeyPath
	I0826 12:15:15.953971  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .GetSSHUsername
	I0826 12:15:15.954108  153366 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/default-k8s-diff-port-697869/id_rsa Username:docker}
	I0826 12:15:16.119775  153366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:16.141629  153366 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167775  153366 node_ready.go:49] node "default-k8s-diff-port-697869" has status "Ready":"True"
	I0826 12:15:16.167813  153366 node_ready.go:38] duration metric: took 26.141251ms for node "default-k8s-diff-port-697869" to be "Ready" ...
	I0826 12:15:16.167823  153366 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:16.174824  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:16.265371  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:16.273443  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:16.273479  153366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:16.295175  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:16.301027  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:16.301063  153366 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:16.351346  153366 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:16.351372  153366 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:16.536263  153366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:17.254787  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254820  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.254872  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.254896  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255317  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255371  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255394  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255396  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255397  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255354  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255412  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255447  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255425  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.255497  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.255721  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255735  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.255839  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.255860  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.255883  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.279566  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.279589  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.279893  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.279914  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792266  153366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255954534s)
	I0826 12:15:17.792329  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792341  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792687  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.792714  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.792727  153366 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:17.792737  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) Calling .Close
	I0826 12:15:17.792693  153366 main.go:141] libmachine: (default-k8s-diff-port-697869) DBG | Closing plugin on server side
	I0826 12:15:17.793052  153366 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:17.793070  153366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:17.793083  153366 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-697869"
	I0826 12:15:17.795156  153366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0826 12:15:17.796583  153366 addons.go:510] duration metric: took 1.923858399s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0826 12:15:18.183088  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.682427  153366 pod_ready.go:103] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:20.903394  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:15:20.903620  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:15:21.684011  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.684037  153366 pod_ready.go:82] duration metric: took 5.509158352s for pod "coredns-6f6b679f8f-9tm7v" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.684047  153366 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689145  153366 pod_ready.go:93] pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:21.689170  153366 pod_ready.go:82] duration metric: took 5.117406ms for pod "coredns-6f6b679f8f-mg7dz" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:21.689180  153366 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695856  153366 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.695897  153366 pod_ready.go:82] duration metric: took 2.006709056s for pod "etcd-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.695912  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700548  153366 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.700572  153366 pod_ready.go:82] duration metric: took 4.650988ms for pod "kube-apiserver-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.700583  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705425  153366 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.705449  153366 pod_ready.go:82] duration metric: took 4.857442ms for pod "kube-controller-manager-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.705461  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710336  153366 pod_ready.go:93] pod "kube-proxy-fkklg" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:23.710368  153366 pod_ready.go:82] duration metric: took 4.897388ms for pod "kube-proxy-fkklg" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:23.710380  153366 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079760  153366 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:24.079791  153366 pod_ready.go:82] duration metric: took 369.402007ms for pod "kube-scheduler-default-k8s-diff-port-697869" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:24.079803  153366 pod_ready.go:39] duration metric: took 7.911968599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:24.079826  153366 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:24.079905  153366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:24.096351  153366 api_server.go:72] duration metric: took 8.22368917s to wait for apiserver process to appear ...
	I0826 12:15:24.096380  153366 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:24.096401  153366 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0826 12:15:24.100636  153366 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0826 12:15:24.102197  153366 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:24.102228  153366 api_server.go:131] duration metric: took 5.839895ms to wait for apiserver health ...
	I0826 12:15:24.102239  153366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:24.282080  153366 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:24.282111  153366 system_pods.go:61] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.282116  153366 system_pods.go:61] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.282120  153366 system_pods.go:61] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.282124  153366 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.282128  153366 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.282131  153366 system_pods.go:61] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.282134  153366 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.282141  153366 system_pods.go:61] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.282148  153366 system_pods.go:61] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.282160  153366 system_pods.go:74] duration metric: took 179.913782ms to wait for pod list to return data ...
	I0826 12:15:24.282174  153366 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:24.478697  153366 default_sa.go:45] found service account: "default"
	I0826 12:15:24.478725  153366 default_sa.go:55] duration metric: took 196.543227ms for default service account to be created ...
	I0826 12:15:24.478735  153366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:24.681990  153366 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:24.682024  153366 system_pods.go:89] "coredns-6f6b679f8f-9tm7v" [5aa79a64-1ea3-4734-99cf-70ea69b3fce3] Running
	I0826 12:15:24.682033  153366 system_pods.go:89] "coredns-6f6b679f8f-mg7dz" [8d15394d-faa4-4bee-a118-346247df5600] Running
	I0826 12:15:24.682039  153366 system_pods.go:89] "etcd-default-k8s-diff-port-697869" [9076e84f-e9d4-431f-8821-5999fbcc3041] Running
	I0826 12:15:24.682047  153366 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697869" [f60d54b4-7828-4eab-8880-7dba1d0f8934] Running
	I0826 12:15:24.682053  153366 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697869" [258f7c93-00c3-467d-a223-17a32435d8fc] Running
	I0826 12:15:24.682059  153366 system_pods.go:89] "kube-proxy-fkklg" [337f5f37-fc3a-45fc-83f0-def91ba4c7af] Running
	I0826 12:15:24.682064  153366 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697869" [160315a9-42b2-490e-ab11-bcc8789f4440] Running
	I0826 12:15:24.682074  153366 system_pods.go:89] "metrics-server-6867b74b74-7d2qs" [c6f45f4a-ec10-4f9d-8e75-bfa9aad9363d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:24.682084  153366 system_pods.go:89] "storage-provisioner" [3becb878-fd98-4476-9c05-cfb6260d2e0a] Running
	I0826 12:15:24.682099  153366 system_pods.go:126] duration metric: took 203.358223ms to wait for k8s-apps to be running ...
	I0826 12:15:24.682112  153366 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:24.682176  153366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:24.696733  153366 system_svc.go:56] duration metric: took 14.61027ms WaitForService to wait for kubelet
	I0826 12:15:24.696763  153366 kubeadm.go:582] duration metric: took 8.824109304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:24.696783  153366 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:24.879924  153366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:24.879956  153366 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:24.879966  153366 node_conditions.go:105] duration metric: took 183.178992ms to run NodePressure ...
	I0826 12:15:24.879990  153366 start.go:241] waiting for startup goroutines ...
	I0826 12:15:24.879997  153366 start.go:246] waiting for cluster config update ...
	I0826 12:15:24.880010  153366 start.go:255] writing updated cluster config ...
	I0826 12:15:24.880311  153366 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:24.930941  153366 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:24.933196  153366 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697869" cluster and "default" namespace by default
	I0826 12:15:36.323870  152463 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.076163509s)
	I0826 12:15:36.323965  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:36.347973  152463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0826 12:15:36.368968  152463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:15:36.382879  152463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:15:36.382903  152463 kubeadm.go:157] found existing configuration files:
	
	I0826 12:15:36.382963  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:15:36.416659  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:15:36.416743  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:15:36.429514  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:15:36.451301  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:15:36.451385  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:15:36.462051  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.472004  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:15:36.472067  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:15:36.482273  152463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:15:36.492841  152463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:15:36.492912  152463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0826 12:15:36.504817  152463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:15:36.551754  152463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0826 12:15:36.551829  152463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:15:36.672687  152463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:15:36.672864  152463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:15:36.672989  152463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0826 12:15:36.683235  152463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:15:36.685324  152463 out.go:235]   - Generating certificates and keys ...
	I0826 12:15:36.685440  152463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:15:36.685547  152463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:15:36.685629  152463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:15:36.685682  152463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:15:36.685739  152463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:15:36.685783  152463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:15:36.685831  152463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:15:36.686022  152463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:15:36.686468  152463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:15:36.686945  152463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:15:36.687303  152463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:15:36.687378  152463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:15:36.967134  152463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:15:37.077904  152463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0826 12:15:37.371185  152463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:15:37.555065  152463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:15:37.634464  152463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:15:37.634927  152463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:15:37.638560  152463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:15:37.640588  152463 out.go:235]   - Booting up control plane ...
	I0826 12:15:37.640726  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:15:37.640832  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:15:37.642937  152463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:15:37.662774  152463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:15:37.672492  152463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:15:37.672548  152463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:15:37.813958  152463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0826 12:15:37.814108  152463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0826 12:15:38.316718  152463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.741081ms
	I0826 12:15:38.316861  152463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0826 12:15:43.318178  152463 kubeadm.go:310] [api-check] The API server is healthy after 5.001355764s
	I0826 12:15:43.331536  152463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0826 12:15:43.349535  152463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0826 12:15:43.387824  152463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0826 12:15:43.388114  152463 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-956479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0826 12:15:43.405027  152463 kubeadm.go:310] [bootstrap-token] Using token: ukbhjp.blg8kbhpg1wwmixs
	I0826 12:15:43.406880  152463 out.go:235]   - Configuring RBAC rules ...
	I0826 12:15:43.407022  152463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0826 12:15:43.422870  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0826 12:15:43.436842  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0826 12:15:43.444123  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0826 12:15:43.454773  152463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0826 12:15:43.467173  152463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0826 12:15:43.727266  152463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0826 12:15:44.155916  152463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0826 12:15:44.726922  152463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0826 12:15:44.727276  152463 kubeadm.go:310] 
	I0826 12:15:44.727355  152463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0826 12:15:44.727366  152463 kubeadm.go:310] 
	I0826 12:15:44.727452  152463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0826 12:15:44.727461  152463 kubeadm.go:310] 
	I0826 12:15:44.727501  152463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0826 12:15:44.727596  152463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0826 12:15:44.727678  152463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0826 12:15:44.727692  152463 kubeadm.go:310] 
	I0826 12:15:44.727778  152463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0826 12:15:44.727803  152463 kubeadm.go:310] 
	I0826 12:15:44.727880  152463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0826 12:15:44.727890  152463 kubeadm.go:310] 
	I0826 12:15:44.727958  152463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0826 12:15:44.728059  152463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0826 12:15:44.728157  152463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0826 12:15:44.728170  152463 kubeadm.go:310] 
	I0826 12:15:44.728278  152463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0826 12:15:44.728381  152463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0826 12:15:44.728390  152463 kubeadm.go:310] 
	I0826 12:15:44.728500  152463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.728621  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 \
	I0826 12:15:44.728650  152463 kubeadm.go:310] 	--control-plane 
	I0826 12:15:44.728655  152463 kubeadm.go:310] 
	I0826 12:15:44.728763  152463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0826 12:15:44.728773  152463 kubeadm.go:310] 
	I0826 12:15:44.728879  152463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ukbhjp.blg8kbhpg1wwmixs \
	I0826 12:15:44.729000  152463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2a103eaa606fd643b4c785b53ee3ba4be9e33a6c5b4f050653fb049cfdc05123 
	I0826 12:15:44.730448  152463 kubeadm.go:310] W0826 12:15:36.526674    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730826  152463 kubeadm.go:310] W0826 12:15:36.527559    3047 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0826 12:15:44.730958  152463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:15:44.730985  152463 cni.go:84] Creating CNI manager for ""
	I0826 12:15:44.731006  152463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 12:15:44.732918  152463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0826 12:15:44.734123  152463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0826 12:15:44.746466  152463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0826 12:15:44.766371  152463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0826 12:15:44.766444  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:44.766500  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956479 minikube.k8s.io/updated_at=2024_08_26T12_15_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fc24c67cee4697ef6a65557a82c91e2bacef62ff minikube.k8s.io/name=no-preload-956479 minikube.k8s.io/primary=true
	I0826 12:15:44.816160  152463 ops.go:34] apiserver oom_adj: -16
	I0826 12:15:44.979504  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.479661  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:45.980448  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.479729  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:46.980060  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.479789  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:47.980142  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.479669  152463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0826 12:15:48.567890  152463 kubeadm.go:1113] duration metric: took 3.801513957s to wait for elevateKubeSystemPrivileges
	I0826 12:15:48.567928  152463 kubeadm.go:394] duration metric: took 4m59.024259276s to StartCluster
	I0826 12:15:48.567954  152463 settings.go:142] acquiring lock: {Name:mk64fc9cd88f714bad927a43ee49b3a00c574837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.568058  152463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 12:15:48.569638  152463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/kubeconfig: {Name:mka41e16562183bd06715ff80a6650a428afe1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 12:15:48.569928  152463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0826 12:15:48.570009  152463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0826 12:15:48.570072  152463 addons.go:69] Setting storage-provisioner=true in profile "no-preload-956479"
	I0826 12:15:48.570106  152463 addons.go:234] Setting addon storage-provisioner=true in "no-preload-956479"
	W0826 12:15:48.570120  152463 addons.go:243] addon storage-provisioner should already be in state true
	I0826 12:15:48.570111  152463 addons.go:69] Setting default-storageclass=true in profile "no-preload-956479"
	I0826 12:15:48.570136  152463 addons.go:69] Setting metrics-server=true in profile "no-preload-956479"
	I0826 12:15:48.570154  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570164  152463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956479"
	I0826 12:15:48.570168  152463 addons.go:234] Setting addon metrics-server=true in "no-preload-956479"
	W0826 12:15:48.570179  152463 addons.go:243] addon metrics-server should already be in state true
	I0826 12:15:48.570189  152463 config.go:182] Loaded profile config "no-preload-956479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 12:15:48.570209  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.570485  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570551  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570575  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570609  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.570621  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.570654  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.572265  152463 out.go:177] * Verifying Kubernetes components...
	I0826 12:15:48.573970  152463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0826 12:15:48.587085  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0826 12:15:48.587132  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0826 12:15:48.587291  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0826 12:15:48.587551  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.587597  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588312  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588331  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588376  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.588491  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588509  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.588696  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588878  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.588965  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.588978  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.589237  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589273  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589402  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.589427  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.589780  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.590142  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.593429  152463 addons.go:234] Setting addon default-storageclass=true in "no-preload-956479"
	W0826 12:15:48.593450  152463 addons.go:243] addon default-storageclass should already be in state true
	I0826 12:15:48.593479  152463 host.go:66] Checking if "no-preload-956479" exists ...
	I0826 12:15:48.593765  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.593796  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.606920  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0826 12:15:48.607123  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0826 12:15:48.607641  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.607775  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.608233  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608253  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608389  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.608401  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.608881  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609068  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.609126  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.609286  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.611449  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0826 12:15:48.611638  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612161  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.612164  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.612932  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.612954  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.613327  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.613815  152463 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0826 12:15:48.614020  152463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0826 12:15:48.614913  152463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 12:15:48.614969  152463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 12:15:48.615993  152463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:48.616019  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0826 12:15:48.616035  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.616812  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0826 12:15:48.616831  152463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0826 12:15:48.616854  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.619999  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.620553  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.620591  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621355  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.621629  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.621699  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621845  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.621868  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.621914  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622126  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.622296  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.622459  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.622662  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.622728  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.633310  152463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 12:15:48.633834  152463 main.go:141] libmachine: () Calling .GetVersion
	I0826 12:15:48.634438  152463 main.go:141] libmachine: Using API Version  1
	I0826 12:15:48.634492  152463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 12:15:48.634892  152463 main.go:141] libmachine: () Calling .GetMachineName
	I0826 12:15:48.635131  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetState
	I0826 12:15:48.636967  152463 main.go:141] libmachine: (no-preload-956479) Calling .DriverName
	I0826 12:15:48.637184  152463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.637204  152463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0826 12:15:48.637225  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHHostname
	I0826 12:15:48.640306  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.640677  152463 main.go:141] libmachine: (no-preload-956479) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:57:47", ip: ""} in network mk-no-preload-956479: {Iface:virbr2 ExpiryTime:2024-08-26 13:00:50 +0000 UTC Type:0 Mac:52:54:00:dd:57:47 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:no-preload-956479 Clientid:01:52:54:00:dd:57:47}
	I0826 12:15:48.640710  152463 main.go:141] libmachine: (no-preload-956479) DBG | domain no-preload-956479 has defined IP address 192.168.50.213 and MAC address 52:54:00:dd:57:47 in network mk-no-preload-956479
	I0826 12:15:48.641042  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHPort
	I0826 12:15:48.641260  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHKeyPath
	I0826 12:15:48.641483  152463 main.go:141] libmachine: (no-preload-956479) Calling .GetSSHUsername
	I0826 12:15:48.641743  152463 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/no-preload-956479/id_rsa Username:docker}
	I0826 12:15:48.771258  152463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0826 12:15:48.788808  152463 node_ready.go:35] waiting up to 6m0s for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800881  152463 node_ready.go:49] node "no-preload-956479" has status "Ready":"True"
	I0826 12:15:48.800916  152463 node_ready.go:38] duration metric: took 12.068483ms for node "no-preload-956479" to be "Ready" ...
	I0826 12:15:48.800926  152463 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0826 12:15:48.806760  152463 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:48.859878  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0826 12:15:48.859902  152463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0826 12:15:48.863874  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0826 12:15:48.884910  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0826 12:15:48.884940  152463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0826 12:15:48.905108  152463 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.905139  152463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0826 12:15:48.929466  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0826 12:15:48.968025  152463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0826 12:15:49.143607  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.143634  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.143980  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.144039  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144048  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144056  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.144063  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.144396  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.144421  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:49.144399  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177127  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:49.177157  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:49.177586  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:49.177590  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:49.177610  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170421  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240899569s)
	I0826 12:15:50.170493  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170509  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.170879  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.170896  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.170919  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.170934  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.170947  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.171212  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.171232  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.171278  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.171294  152463 addons.go:475] Verifying addon metrics-server=true in "no-preload-956479"
	I0826 12:15:50.240347  152463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.272272683s)
	I0826 12:15:50.240403  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240416  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.240837  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.240861  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.240867  152463 main.go:141] libmachine: (no-preload-956479) DBG | Closing plugin on server side
	I0826 12:15:50.240871  152463 main.go:141] libmachine: Making call to close driver server
	I0826 12:15:50.240906  152463 main.go:141] libmachine: (no-preload-956479) Calling .Close
	I0826 12:15:50.241192  152463 main.go:141] libmachine: Successfully made call to close driver server
	I0826 12:15:50.241208  152463 main.go:141] libmachine: Making call to close connection to plugin binary
	I0826 12:15:50.243352  152463 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0826 12:15:50.244857  152463 addons.go:510] duration metric: took 1.674848626s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
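The addon phase above copies each manifest onto the node and applies it with the bundled kubectl against the node-local kubeconfig ("sudo KUBECONFIG=... kubectl apply -f ..."). A minimal sketch of that apply step, using the same paths that appear in the log; this is an illustration of the pattern, not minikube's actual addons code, and it assumes it runs on the node with sudo available.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies one addon manifest with the bundled kubectl, mirroring
// the "sudo KUBECONFIG=... kubectl apply -f ..." commands in the log above.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}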
	I0826 12:15:50.821689  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:53.313148  152463 pod_ready.go:103] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:54.313605  152463 pod_ready.go:93] pod "etcd-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:54.313634  152463 pod_ready.go:82] duration metric: took 5.506845108s for pod "etcd-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:54.313646  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.320782  152463 pod_ready.go:103] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"False"
	I0826 12:15:56.822596  152463 pod_ready.go:93] pod "kube-apiserver-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.822626  152463 pod_ready.go:82] duration metric: took 2.508972184s for pod "kube-apiserver-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.822652  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829472  152463 pod_ready.go:93] pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.829497  152463 pod_ready.go:82] duration metric: took 6.836827ms for pod "kube-controller-manager-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.829508  152463 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835063  152463 pod_ready.go:93] pod "kube-scheduler-no-preload-956479" in "kube-system" namespace has status "Ready":"True"
	I0826 12:15:56.835087  152463 pod_ready.go:82] duration metric: took 5.573211ms for pod "kube-scheduler-no-preload-956479" in "kube-system" namespace to be "Ready" ...
	I0826 12:15:56.835095  152463 pod_ready.go:39] duration metric: took 8.03415934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
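The pod_ready waits above poll each system-critical pod in kube-system until its Ready condition is True. The client-go sketch below shows that kind of poll in simplified form; the pod name is taken from the log, and the kubeconfig path is the client-go default, not what minikube's pod_ready.go actually does internally.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its Ready condition is True,
// similar in spirit to the pod_ready.go waits in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-956479", 6*time.Minute))
}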
	I0826 12:15:56.835111  152463 api_server.go:52] waiting for apiserver process to appear ...
	I0826 12:15:56.835162  152463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 12:15:56.852565  152463 api_server.go:72] duration metric: took 8.282599518s to wait for apiserver process to appear ...
	I0826 12:15:56.852595  152463 api_server.go:88] waiting for apiserver healthz status ...
	I0826 12:15:56.852614  152463 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0826 12:15:56.857431  152463 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0826 12:15:56.858525  152463 api_server.go:141] control plane version: v1.31.0
	I0826 12:15:56.858548  152463 api_server.go:131] duration metric: took 5.945927ms to wait for apiserver health ...
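The healthz wait above issues an HTTPS GET to https://192.168.50.213:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal probe of that endpoint is sketched below; skipping certificate verification is an assumption made only to keep the example short (minikube's real check trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver /healthz endpoint seen in the log above.
	// TLS verification is skipped here purely for brevity.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.213:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "200 ok"
}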
	I0826 12:15:56.858556  152463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0826 12:15:56.863726  152463 system_pods.go:59] 9 kube-system pods found
	I0826 12:15:56.863750  152463 system_pods.go:61] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.863757  152463 system_pods.go:61] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.863762  152463 system_pods.go:61] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.863768  152463 system_pods.go:61] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.863773  152463 system_pods.go:61] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.863776  152463 system_pods.go:61] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.863780  152463 system_pods.go:61] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.863784  152463 system_pods.go:61] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.863788  152463 system_pods.go:61] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.863794  152463 system_pods.go:74] duration metric: took 5.233096ms to wait for pod list to return data ...
	I0826 12:15:56.863801  152463 default_sa.go:34] waiting for default service account to be created ...
	I0826 12:15:56.866245  152463 default_sa.go:45] found service account: "default"
	I0826 12:15:56.866263  152463 default_sa.go:55] duration metric: took 2.456594ms for default service account to be created ...
	I0826 12:15:56.866270  152463 system_pods.go:116] waiting for k8s-apps to be running ...
	I0826 12:15:56.870592  152463 system_pods.go:86] 9 kube-system pods found
	I0826 12:15:56.870614  152463 system_pods.go:89] "coredns-6f6b679f8f-8489w" [2bcfb870-46aa-4ec1-b958-707896e53120] Running
	I0826 12:15:56.870621  152463 system_pods.go:89] "coredns-6f6b679f8f-wnd26" [94b517df-9201-4602-a58f-77617a38d641] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0826 12:15:56.870626  152463 system_pods.go:89] "etcd-no-preload-956479" [5900262e-0d5b-4073-aedb-f49f95ab9d6e] Running
	I0826 12:15:56.870634  152463 system_pods.go:89] "kube-apiserver-no-preload-956479" [e486a233-1e91-49b4-b257-91c8ec9cd314] Running
	I0826 12:15:56.870640  152463 system_pods.go:89] "kube-controller-manager-no-preload-956479" [75c23582-0daa-4812-af52-e1e3d343a047] Running
	I0826 12:15:56.870645  152463 system_pods.go:89] "kube-proxy-gwj5w" [18bfe796-2c64-420d-a01d-ea68c56573c7] Running
	I0826 12:15:56.870656  152463 system_pods.go:89] "kube-scheduler-no-preload-956479" [4fc2e243-39ed-451c-80f1-706669a833f9] Running
	I0826 12:15:56.870663  152463 system_pods.go:89] "metrics-server-6867b74b74-gmfbr" [558889e1-e85a-45ef-9636-892204c4cf48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0826 12:15:56.870673  152463 system_pods.go:89] "storage-provisioner" [b0640b7f-39d3-4fb1-b78c-2f1f970646ae] Running
	I0826 12:15:56.870681  152463 system_pods.go:126] duration metric: took 4.405758ms to wait for k8s-apps to be running ...
	I0826 12:15:56.870688  152463 system_svc.go:44] waiting for kubelet service to be running ....
	I0826 12:15:56.870736  152463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:15:56.886533  152463 system_svc.go:56] duration metric: took 15.833026ms WaitForService to wait for kubelet
	I0826 12:15:56.886582  152463 kubeadm.go:582] duration metric: took 8.316620619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0826 12:15:56.886607  152463 node_conditions.go:102] verifying NodePressure condition ...
	I0826 12:15:56.895864  152463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0826 12:15:56.895902  152463 node_conditions.go:123] node cpu capacity is 2
	I0826 12:15:56.895917  152463 node_conditions.go:105] duration metric: took 9.302123ms to run NodePressure ...
	I0826 12:15:56.895934  152463 start.go:241] waiting for startup goroutines ...
	I0826 12:15:56.895945  152463 start.go:246] waiting for cluster config update ...
	I0826 12:15:56.895960  152463 start.go:255] writing updated cluster config ...
	I0826 12:15:56.896336  152463 ssh_runner.go:195] Run: rm -f paused
	I0826 12:15:56.947198  152463 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0826 12:15:56.949119  152463 out.go:177] * Done! kubectl is now configured to use "no-preload-956479" cluster and "default" namespace by default
	I0826 12:16:00.905372  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:00.905692  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:00.905720  152982 kubeadm.go:310] 
	I0826 12:16:00.905753  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:16:00.905784  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:16:00.905791  152982 kubeadm.go:310] 
	I0826 12:16:00.905819  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:16:00.905877  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:16:00.906033  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:16:00.906050  152982 kubeadm.go:310] 
	I0826 12:16:00.906190  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:16:00.906257  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:16:00.906304  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:16:00.906311  152982 kubeadm.go:310] 
	I0826 12:16:00.906444  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:16:00.906687  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:16:00.906700  152982 kubeadm.go:310] 
	I0826 12:16:00.906794  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:16:00.906945  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:16:00.907050  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:16:00.907167  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:16:00.907184  152982 kubeadm.go:310] 
	I0826 12:16:00.907768  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:16:00.907869  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:16:00.907959  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0826 12:16:00.908103  152982 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
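The repeated [kubelet-check] lines above come from kubeadm probing the kubelet's local health endpoint and getting "connection refused" because the kubelet never came up. A hedged sketch of that probe loop is shown below, using the port and path from the log; it is an illustration of the check, not kubeadm's source.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Retry the kubelet healthz endpoint that kubeadm's [kubelet-check] probes;
	// "connection refused" here matches the failures reported in the log above.
	for i := 0; i < 5; i++ {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.Status)
		return
	}
	fmt.Println("kubelet never became healthy; check 'journalctl -xeu kubelet'")
}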
	
	I0826 12:16:00.908168  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0826 12:16:01.392633  152982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 12:16:01.408303  152982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0826 12:16:01.419069  152982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0826 12:16:01.419104  152982 kubeadm.go:157] found existing configuration files:
	
	I0826 12:16:01.419162  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0826 12:16:01.429440  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0826 12:16:01.429513  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0826 12:16:01.440092  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0826 12:16:01.450451  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0826 12:16:01.450528  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0826 12:16:01.461166  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.472084  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0826 12:16:01.472155  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0826 12:16:01.482791  152982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0826 12:16:01.493636  152982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0826 12:16:01.493737  152982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
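Before retrying kubeadm init, the grep/rm pairs above check each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete files that are missing or stale. A simplified local sketch of that cleanup follows; it assumes direct file access rather than the ssh_runner indirection used in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale config: remove it so kubeadm regenerates it,
			// mirroring the "sudo rm -f" steps in the log above.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping config:", f)
	}
}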
	I0826 12:16:01.504679  152982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0826 12:16:01.576700  152982 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0826 12:16:01.576854  152982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0826 12:16:01.728501  152982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0826 12:16:01.728682  152982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0826 12:16:01.728846  152982 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0826 12:16:01.928072  152982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0826 12:16:01.929877  152982 out.go:235]   - Generating certificates and keys ...
	I0826 12:16:01.929988  152982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0826 12:16:01.930128  152982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0826 12:16:01.930271  152982 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0826 12:16:01.930373  152982 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0826 12:16:01.930484  152982 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0826 12:16:01.930593  152982 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0826 12:16:01.930680  152982 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0826 12:16:01.930766  152982 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0826 12:16:01.931012  152982 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0826 12:16:01.931363  152982 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0826 12:16:01.931414  152982 kubeadm.go:310] [certs] Using the existing "sa" key
	I0826 12:16:01.931593  152982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0826 12:16:02.054133  152982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0826 12:16:02.301995  152982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0826 12:16:02.372665  152982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0826 12:16:02.823940  152982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0826 12:16:02.844516  152982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0826 12:16:02.844641  152982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0826 12:16:02.844724  152982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0826 12:16:02.995838  152982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0826 12:16:02.997571  152982 out.go:235]   - Booting up control plane ...
	I0826 12:16:02.997707  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0826 12:16:02.999055  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0826 12:16:03.000691  152982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0826 12:16:03.010427  152982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0826 12:16:03.013494  152982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0826 12:16:43.016147  152982 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0826 12:16:43.016271  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:43.016481  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:48.016709  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:48.016976  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:16:58.017776  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:16:58.018006  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:18.018369  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:18.018592  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.017759  152982 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0826 12:17:58.018053  152982 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0826 12:17:58.018084  152982 kubeadm.go:310] 
	I0826 12:17:58.018121  152982 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0826 12:17:58.018157  152982 kubeadm.go:310] 		timed out waiting for the condition
	I0826 12:17:58.018163  152982 kubeadm.go:310] 
	I0826 12:17:58.018192  152982 kubeadm.go:310] 	This error is likely caused by:
	I0826 12:17:58.018224  152982 kubeadm.go:310] 		- The kubelet is not running
	I0826 12:17:58.018310  152982 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0826 12:17:58.018337  152982 kubeadm.go:310] 
	I0826 12:17:58.018477  152982 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0826 12:17:58.018537  152982 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0826 12:17:58.018619  152982 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0826 12:17:58.018633  152982 kubeadm.go:310] 
	I0826 12:17:58.018723  152982 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0826 12:17:58.018810  152982 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0826 12:17:58.018820  152982 kubeadm.go:310] 
	I0826 12:17:58.019007  152982 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0826 12:17:58.019157  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0826 12:17:58.019291  152982 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0826 12:17:58.019403  152982 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0826 12:17:58.019414  152982 kubeadm.go:310] 
	I0826 12:17:58.020426  152982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0826 12:17:58.020541  152982 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0826 12:17:58.020627  152982 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0826 12:17:58.020705  152982 kubeadm.go:394] duration metric: took 7m57.559327665s to StartCluster
	I0826 12:17:58.020799  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0826 12:17:58.020875  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0826 12:17:58.061950  152982 cri.go:89] found id: ""
	I0826 12:17:58.061979  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.061989  152982 logs.go:278] No container was found matching "kube-apiserver"
	I0826 12:17:58.061998  152982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0826 12:17:58.062057  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0826 12:17:58.100419  152982 cri.go:89] found id: ""
	I0826 12:17:58.100451  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.100465  152982 logs.go:278] No container was found matching "etcd"
	I0826 12:17:58.100474  152982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0826 12:17:58.100536  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0826 12:17:58.135329  152982 cri.go:89] found id: ""
	I0826 12:17:58.135360  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.135369  152982 logs.go:278] No container was found matching "coredns"
	I0826 12:17:58.135378  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0826 12:17:58.135472  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0826 12:17:58.169826  152982 cri.go:89] found id: ""
	I0826 12:17:58.169858  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.169870  152982 logs.go:278] No container was found matching "kube-scheduler"
	I0826 12:17:58.169888  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0826 12:17:58.169958  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0826 12:17:58.204549  152982 cri.go:89] found id: ""
	I0826 12:17:58.204583  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.204593  152982 logs.go:278] No container was found matching "kube-proxy"
	I0826 12:17:58.204600  152982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0826 12:17:58.204668  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0826 12:17:58.241886  152982 cri.go:89] found id: ""
	I0826 12:17:58.241917  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.241926  152982 logs.go:278] No container was found matching "kube-controller-manager"
	I0826 12:17:58.241933  152982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0826 12:17:58.241997  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0826 12:17:58.276159  152982 cri.go:89] found id: ""
	I0826 12:17:58.276194  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.276206  152982 logs.go:278] No container was found matching "kindnet"
	I0826 12:17:58.276220  152982 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0826 12:17:58.276288  152982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0826 12:17:58.311319  152982 cri.go:89] found id: ""
	I0826 12:17:58.311352  152982 logs.go:276] 0 containers: []
	W0826 12:17:58.311364  152982 logs.go:278] No container was found matching "kubernetes-dashboard"
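After the init timeout, minikube scans for every expected control-plane container by name with crictl, and each scan above comes back empty. A hedged sketch of that scan is below, assuming crictl is on PATH and talks to the default CRI-O socket.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs per component, as the "sudo crictl ps -a --quiet
	// --name=..." scans in the log above do.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager",
	} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}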
	I0826 12:17:58.311377  152982 logs.go:123] Gathering logs for kubelet ...
	I0826 12:17:58.311394  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0826 12:17:58.365300  152982 logs.go:123] Gathering logs for dmesg ...
	I0826 12:17:58.365352  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0826 12:17:58.378933  152982 logs.go:123] Gathering logs for describe nodes ...
	I0826 12:17:58.378972  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0826 12:17:58.464890  152982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0826 12:17:58.464920  152982 logs.go:123] Gathering logs for CRI-O ...
	I0826 12:17:58.464939  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0826 12:17:58.581032  152982 logs.go:123] Gathering logs for container status ...
	I0826 12:17:58.581076  152982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
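To build the failure report, the "Gathering logs for ..." steps above pull the kubelet and CRI-O journals, dmesg, node descriptions, and container status. The sketch below collects the journal portions in simplified form, assuming a systemd host with journalctl and dmesg available; flags are trimmed relative to the exact commands in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Collect the same journals the "Gathering logs for ..." steps above read.
	cmds := map[string][]string{
		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
		"crio":    {"journalctl", "-u", "crio", "-n", "400"},
		"dmesg":   {"dmesg", "--level", "warn,err,crit,alert,emerg"},
	}
	for name, args := range cmds {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (error: %v) ==\n%s\n", name, err, out)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}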
	W0826 12:17:58.633835  152982 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0826 12:17:58.633919  152982 out.go:270] * 
	W0826 12:17:58.634025  152982 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.634049  152982 out.go:270] * 
	W0826 12:17:58.635201  152982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0826 12:17:58.639004  152982 out.go:201] 
	W0826 12:17:58.640230  152982 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0826 12:17:58.640308  152982 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0826 12:17:58.640327  152982 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0826 12:17:58.641876  152982 out.go:201] 
	
	
	==> CRI-O <==
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.932842742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675335932820011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a846096d-01e4-42e8-8f5b-f413dfd77eb2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.933504464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9474cba-62f7-450f-9025-47b5179c3afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.933575857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9474cba-62f7-450f-9025-47b5179c3afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.933612103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a9474cba-62f7-450f-9025-47b5179c3afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.964383580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4960ba5d-b5cb-4c15-8e8b-b988d937f1b7 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.964526235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4960ba5d-b5cb-4c15-8e8b-b988d937f1b7 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.965658981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a51ef8d-5732-461d-8aa2-bf07c9817b1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.966068924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675335966041108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a51ef8d-5732-461d-8aa2-bf07c9817b1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.966594797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3cd3c72-9968-4d78-a690-8285d1dbd784 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.966657385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3cd3c72-9968-4d78-a690-8285d1dbd784 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:55 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:55.966719714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f3cd3c72-9968-4d78-a690-8285d1dbd784 name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.002914358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1243e786-5b4a-4583-adfe-315681d7d297 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.003027687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1243e786-5b4a-4583-adfe-315681d7d297 name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.004362398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c80ea717-11c4-4ed1-a811-8ac0835bea79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.004924568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675336004898406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c80ea717-11c4-4ed1-a811-8ac0835bea79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.005597939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c47df435-a0d0-4587-9e5a-c9577fd6929f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.005678012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c47df435-a0d0-4587-9e5a-c9577fd6929f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.005744473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c47df435-a0d0-4587-9e5a-c9577fd6929f name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.035995866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f37cd3d-d21f-4635-a5cd-b5e7a813742d name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.036116057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f37cd3d-d21f-4635-a5cd-b5e7a813742d name=/runtime.v1.RuntimeService/Version
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.037897659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67e326b3-2600-48f6-9397-36fa006af355 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.038528889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724675336038476052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67e326b3-2600-48f6-9397-36fa006af355 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.039261338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=941697f4-3b88-41d0-a743-76c524a5d1dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.039355244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=941697f4-3b88-41d0-a743-76c524a5d1dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 26 12:28:56 old-k8s-version-839656 crio[650]: time="2024-08-26 12:28:56.039411707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=941697f4-3b88-41d0-a743-76c524a5d1dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug26 12:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052898] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039892] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851891] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935402] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.449604] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.385904] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.067684] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067976] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.189122] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.154809] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.263872] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.466854] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.059639] kauditd_printk_skb: 130 callbacks suppressed
	[Aug26 12:10] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +12.058589] kauditd_printk_skb: 46 callbacks suppressed
	[Aug26 12:14] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug26 12:16] systemd-fstab-generator[5304]: Ignoring "noauto" option for root device
	[  +0.068224] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:28:56 up 19 min,  0 users,  load average: 0.22, 0.10, 0.07
	Linux old-k8s-version-839656 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: net.(*Dialer).DialContext(0xc000c3d620, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000057e00, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c4a100, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000057e00, 0x24, 0x60, 0x7f9738d80550, 0x118, ...)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: net/http.(*Transport).dial(0xc0007e4a00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000057e00, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: net/http.(*Transport).dialConn(0xc0007e4a00, 0x4f7fe00, 0xc000052030, 0x0, 0xc00028ed20, 0x5, 0xc000057e00, 0x24, 0x0, 0xc0004f2120, ...)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: net/http.(*Transport).dialConnFor(0xc0007e4a00, 0xc000091ce0)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: created by net/http.(*Transport).queueForDial
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: goroutine 157 [select]:
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000343d40, 0xc000c6af00, 0xc00028efc0, 0xc00028ef60)
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]: created by net.(*netFD).connect
	Aug 26 12:28:53 old-k8s-version-839656 kubelet[6749]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Aug 26 12:28:54 old-k8s-version-839656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 134.
	Aug 26 12:28:54 old-k8s-version-839656 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 26 12:28:54 old-k8s-version-839656 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 26 12:28:54 old-k8s-version-839656 kubelet[6757]: I0826 12:28:54.090274    6757 server.go:416] Version: v1.20.0
	Aug 26 12:28:54 old-k8s-version-839656 kubelet[6757]: I0826 12:28:54.090709    6757 server.go:837] Client rotation is on, will bootstrap in background
	Aug 26 12:28:54 old-k8s-version-839656 kubelet[6757]: I0826 12:28:54.092673    6757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 26 12:28:54 old-k8s-version-839656 kubelet[6757]: I0826 12:28:54.093706    6757 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 26 12:28:54 old-k8s-version-839656 kubelet[6757]: W0826 12:28:54.093803    6757 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 2 (243.266453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-839656" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.73s)
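The failure mode visible in the log above is that the kubelet on the old-k8s-version-839656 node never becomes healthy, so the apiserver on localhost:8443 stays unreachable and the addon check cannot complete. A minimal sketch of the follow-up the output itself suggests (profile name, CRI-O socket path, and the --extra-config flag are taken verbatim from the log; they are illustrative, not a verified fix):

	# inside the VM, e.g. via: minikube ssh -p old-k8s-version-839656
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry suggested by the minikube warning earlier in the log
	minikube start -p old-k8s-version-839656 --extra-config=kubelet.cgroup-driver=systemd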

                                                
                                    

Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.19
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 12.34
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 90.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 140.13
31 TestAddons/serial/GCPAuth/Namespaces 0.15
33 TestAddons/parallel/Registry 16.21
35 TestAddons/parallel/InspektorGadget 11.84
37 TestAddons/parallel/HelmTiller 11.54
39 TestAddons/parallel/CSI 62.56
40 TestAddons/parallel/Headlamp 19
41 TestAddons/parallel/CloudSpanner 5.68
42 TestAddons/parallel/LocalPath 57.62
43 TestAddons/parallel/NvidiaDevicePlugin 6.91
44 TestAddons/parallel/Yakd 11.1
46 TestCertOptions 88.87
47 TestCertExpiration 376.9
49 TestForceSystemdFlag 74.54
50 TestForceSystemdEnv 46.28
52 TestKVMDriverInstallOrUpdate 3.93
56 TestErrorSpam/setup 42.26
57 TestErrorSpam/start 0.37
58 TestErrorSpam/status 0.77
59 TestErrorSpam/pause 1.64
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 4.8
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 56.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.6
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.36
73 TestFunctional/serial/CacheCmd/cache/add_local 2.66
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 33.78
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.44
85 TestFunctional/serial/InvalidService 4.25
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 11.47
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.08
95 TestFunctional/parallel/ServiceCmdConnect 22.61
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 46.87
99 TestFunctional/parallel/SSHCmd 0.46
100 TestFunctional/parallel/CpCmd 1.41
101 TestFunctional/parallel/MySQL 26.14
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.5
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
111 TestFunctional/parallel/License 0.98
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
125 TestFunctional/parallel/ServiceCmd/List 0.45
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
128 TestFunctional/parallel/ServiceCmd/Format 0.34
129 TestFunctional/parallel/ServiceCmd/URL 0.31
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
131 TestFunctional/parallel/ProfileCmd/profile_list 0.55
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
133 TestFunctional/parallel/MountCmd/any-port 14.52
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.62
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.59
141 TestFunctional/parallel/ImageCommands/Setup 1.89
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.75
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.9
145 TestFunctional/parallel/MountCmd/specific-port 2.06
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.93
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.15
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 195.22
158 TestMultiControlPlane/serial/DeployApp 6
159 TestMultiControlPlane/serial/PingHostFromPods 1.24
160 TestMultiControlPlane/serial/AddWorkerNode 58.79
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
163 TestMultiControlPlane/serial/CopyFile 13.17
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.49
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.42
172 TestMultiControlPlane/serial/RestartCluster 455.36
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
174 TestMultiControlPlane/serial/AddSecondaryNode 80.06
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
179 TestJSONOutput/start/Command 58.51
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.65
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.21
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 84.49
211 TestMountStart/serial/StartWithMountFirst 28.13
212 TestMountStart/serial/VerifyMountFirst 0.38
213 TestMountStart/serial/StartWithMountSecond 26.65
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.38
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 20.89
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 111.19
223 TestMultiNode/serial/DeployApp2Nodes 5.56
224 TestMultiNode/serial/PingHostFrom2Pods 0.8
225 TestMultiNode/serial/AddNode 51.85
226 TestMultiNode/serial/MultiNodeLabels 0.07
227 TestMultiNode/serial/ProfileList 0.24
228 TestMultiNode/serial/CopyFile 7.43
229 TestMultiNode/serial/StopNode 2.35
230 TestMultiNode/serial/StartAfterStop 39.96
232 TestMultiNode/serial/DeleteNode 2.35
234 TestMultiNode/serial/RestartMultiNode 176.48
235 TestMultiNode/serial/ValidateNameConflict 42.59
242 TestScheduledStopUnix 114.78
246 TestRunningBinaryUpgrade 221.17
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 96.91
253 TestStoppedBinaryUpgrade/Setup 2.26
254 TestStoppedBinaryUpgrade/Upgrade 141.92
255 TestNoKubernetes/serial/StartWithStopK8s 65.99
256 TestNoKubernetes/serial/Start 29.69
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 23.58
259 TestNoKubernetes/serial/Stop 3.13
260 TestNoKubernetes/serial/StartNoArgs 23.91
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestNetworkPlugins/group/false 3.39
282 TestPause/serial/Start 56.67
287 TestStartStop/group/no-preload/serial/FirstStart 78.07
289 TestStartStop/group/embed-certs/serial/FirstStart 71.79
290 TestStartStop/group/no-preload/serial/DeployApp 9.29
291 TestStartStop/group/embed-certs/serial/DeployApp 11.27
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.06
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
305 TestStartStop/group/no-preload/serial/SecondStart 682.52
306 TestStartStop/group/embed-certs/serial/SecondStart 613.82
307 TestStartStop/group/old-k8s-version/serial/Stop 4.29
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 509.45
321 TestStartStop/group/newest-cni/serial/FirstStart 44.18
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
324 TestStartStop/group/newest-cni/serial/Stop 10.35
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
326 TestStartStop/group/newest-cni/serial/SecondStart 36.46
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
330 TestStartStop/group/newest-cni/serial/Pause 4.49
331 TestNetworkPlugins/group/auto/Start 63.33
332 TestNetworkPlugins/group/kindnet/Start 94
333 TestNetworkPlugins/group/calico/Start 127.54
334 TestNetworkPlugins/group/auto/KubeletFlags 0.25
335 TestNetworkPlugins/group/auto/NetCatPod 12.47
336 TestNetworkPlugins/group/auto/DNS 0.2
337 TestNetworkPlugins/group/auto/Localhost 0.15
338 TestNetworkPlugins/group/auto/HairPin 0.13
339 TestNetworkPlugins/group/custom-flannel/Start 76.2
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
342 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
343 TestNetworkPlugins/group/kindnet/DNS 0.19
344 TestNetworkPlugins/group/kindnet/Localhost 0.13
345 TestNetworkPlugins/group/kindnet/HairPin 0.17
346 TestNetworkPlugins/group/enable-default-cni/Start 59.31
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.22
349 TestNetworkPlugins/group/calico/NetCatPod 11.29
350 TestNetworkPlugins/group/calico/DNS 0.16
351 TestNetworkPlugins/group/calico/Localhost 0.15
352 TestNetworkPlugins/group/calico/HairPin 0.14
353 TestNetworkPlugins/group/flannel/Start 73.48
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
356 TestNetworkPlugins/group/bridge/Start 82.54
357 TestNetworkPlugins/group/custom-flannel/DNS 0.16
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
367 TestNetworkPlugins/group/flannel/NetCatPod 11.22
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
369 TestNetworkPlugins/group/bridge/NetCatPod 10.22
370 TestNetworkPlugins/group/flannel/DNS 0.24
371 TestNetworkPlugins/group/flannel/Localhost 0.15
372 TestNetworkPlugins/group/flannel/HairPin 0.13
373 TestNetworkPlugins/group/bridge/DNS 0.18
374 TestNetworkPlugins/group/bridge/Localhost 0.14
375 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (25.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-232599 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-232599 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.185355274s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-232599
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-232599: exit status 85 (66.884076ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-232599 | jenkins | v1.33.1 | 26 Aug 24 10:46 UTC |          |
	|         | -p download-only-232599        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 10:46:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 10:46:34.756025  106610 out.go:345] Setting OutFile to fd 1 ...
	I0826 10:46:34.756289  106610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:46:34.756297  106610 out.go:358] Setting ErrFile to fd 2...
	I0826 10:46:34.756302  106610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:46:34.756481  106610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	W0826 10:46:34.756664  106610 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19501-99403/.minikube/config/config.json: open /home/jenkins/minikube-integration/19501-99403/.minikube/config/config.json: no such file or directory
	I0826 10:46:34.757249  106610 out.go:352] Setting JSON to true
	I0826 10:46:34.758241  106610 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1736,"bootTime":1724667459,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 10:46:34.758312  106610 start.go:139] virtualization: kvm guest
	I0826 10:46:34.760810  106610 out.go:97] [download-only-232599] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0826 10:46:34.760978  106610 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball: no such file or directory
	I0826 10:46:34.761048  106610 notify.go:220] Checking for updates...
	I0826 10:46:34.762618  106610 out.go:169] MINIKUBE_LOCATION=19501
	I0826 10:46:34.764063  106610 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 10:46:34.765534  106610 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:46:34.766734  106610 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:46:34.768349  106610 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0826 10:46:34.771313  106610 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0826 10:46:34.771578  106610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 10:46:34.890253  106610 out.go:97] Using the kvm2 driver based on user configuration
	I0826 10:46:34.890286  106610 start.go:297] selected driver: kvm2
	I0826 10:46:34.890299  106610 start.go:901] validating driver "kvm2" against <nil>
	I0826 10:46:34.890646  106610 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:46:34.890799  106610 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 10:46:34.907648  106610 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 10:46:34.907744  106610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 10:46:34.908240  106610 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0826 10:46:34.908388  106610 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 10:46:34.908442  106610 cni.go:84] Creating CNI manager for ""
	I0826 10:46:34.908458  106610 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:46:34.908468  106610 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 10:46:34.908517  106610 start.go:340] cluster config:
	{Name:download-only-232599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-232599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:46:34.908702  106610 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:46:34.911007  106610 out.go:97] Downloading VM boot image ...
	I0826 10:46:34.911066  106610 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0826 10:46:45.080255  106610 out.go:97] Starting "download-only-232599" primary control-plane node in "download-only-232599" cluster
	I0826 10:46:45.080284  106610 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 10:46:45.176388  106610 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0826 10:46:45.176429  106610 cache.go:56] Caching tarball of preloaded images
	I0826 10:46:45.176581  106610 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 10:46:45.178528  106610 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0826 10:46:45.178565  106610 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0826 10:46:45.280058  106610 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0826 10:46:58.091382  106610 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0826 10:46:58.091483  106610 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0826 10:46:59.114119  106610 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0826 10:46:59.114453  106610 profile.go:143] Saving config to /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/download-only-232599/config.json ...
	I0826 10:46:59.114485  106610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/download-only-232599/config.json: {Name:mke98eb21e62462a960ee64305949f838a64da75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0826 10:46:59.114650  106610 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0826 10:46:59.114818  106610 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-232599 host does not exist
	  To start a cluster, run: "minikube start -p download-only-232599"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
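The "getting checksum" / "verifying checksum" steps in the log above gate the preload download on the md5 value embedded in the download URL. A minimal manual equivalent, assuming the cache path and checksum exactly as they appear in that log:

	md5sum /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected: f93b07cde9c3289306cbaeb7a1803c19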

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-232599
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (12.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-210128 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-210128 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.338033397s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-210128
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-210128: exit status 85 (61.99306ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-232599 | jenkins | v1.33.1 | 26 Aug 24 10:46 UTC |                     |
	|         | -p download-only-232599        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| delete  | -p download-only-232599        | download-only-232599 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC | 26 Aug 24 10:47 UTC |
	| start   | -o=json --download-only        | download-only-210128 | jenkins | v1.33.1 | 26 Aug 24 10:47 UTC |                     |
	|         | -p download-only-210128        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/26 10:47:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0826 10:47:00.291638  106861 out.go:345] Setting OutFile to fd 1 ...
	I0826 10:47:00.291921  106861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:00.291930  106861 out.go:358] Setting ErrFile to fd 2...
	I0826 10:47:00.291934  106861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 10:47:00.292132  106861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 10:47:00.292737  106861 out.go:352] Setting JSON to true
	I0826 10:47:00.293639  106861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1761,"bootTime":1724667459,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 10:47:00.293709  106861 start.go:139] virtualization: kvm guest
	I0826 10:47:00.296036  106861 out.go:97] [download-only-210128] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 10:47:00.296207  106861 notify.go:220] Checking for updates...
	I0826 10:47:00.297720  106861 out.go:169] MINIKUBE_LOCATION=19501
	I0826 10:47:00.299275  106861 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 10:47:00.300805  106861 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 10:47:00.302287  106861 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 10:47:00.304279  106861 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0826 10:47:00.307805  106861 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0826 10:47:00.308045  106861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 10:47:00.342226  106861 out.go:97] Using the kvm2 driver based on user configuration
	I0826 10:47:00.342261  106861 start.go:297] selected driver: kvm2
	I0826 10:47:00.342276  106861 start.go:901] validating driver "kvm2" against <nil>
	I0826 10:47:00.342626  106861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:00.342725  106861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19501-99403/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0826 10:47:00.364154  106861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0826 10:47:00.364222  106861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0826 10:47:00.364697  106861 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0826 10:47:00.364852  106861 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0826 10:47:00.364888  106861 cni.go:84] Creating CNI manager for ""
	I0826 10:47:00.364898  106861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0826 10:47:00.364909  106861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0826 10:47:00.364973  106861 start.go:340] cluster config:
	{Name:download-only-210128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-210128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 10:47:00.365068  106861 iso.go:125] acquiring lock: {Name:mkbcdec5f9bc9f35782823301ebec92449f1b0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0826 10:47:00.366875  106861 out.go:97] Starting "download-only-210128" primary control-plane node in "download-only-210128" cluster
	I0826 10:47:00.366903  106861 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:00.812605  106861 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0826 10:47:00.812659  106861 cache.go:56] Caching tarball of preloaded images
	I0826 10:47:00.812875  106861 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0826 10:47:00.814873  106861 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0826 10:47:00.814904  106861 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0826 10:47:00.915457  106861 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19501-99403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-210128 host does not exist
	  To start a cluster, run: "minikube start -p download-only-210128"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-210128
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-754943 --alsologtostderr --binary-mirror http://127.0.0.1:44369 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-754943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-754943
--- PASS: TestBinaryMirror (0.61s)

TestOffline (90.09s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-511327 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-511327 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.003331881s)
helpers_test.go:175: Cleaning up "offline-crio-511327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-511327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-511327: (1.084688911s)
--- PASS: TestOffline (90.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-530639
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-530639: exit status 85 (51.697498ms)

-- stdout --
	* Profile "addons-530639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-530639"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-530639
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-530639: exit status 85 (53.380618ms)

-- stdout --
	* Profile "addons-530639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-530639"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (140.13s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-530639 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-530639 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.129106907s)
--- PASS: TestAddons/Setup (140.13s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-530639 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-530639 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/parallel/Registry (16.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.123885ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-22wjc" [32d6b7ea-5422-4b4d-a7fe-209b1fae6bb8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00445038s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vmr7f" [b4617f2b-ddb1-47b0-baf2-2418c37ffd7f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00434436s
addons_test.go:342: (dbg) Run:  kubectl --context addons-530639 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-530639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-530639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.401226289s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 ip
2024/08/26 10:50:19 [DEBUG] GET http://192.168.39.11:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.21s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xfr8k" [3741a81a-87ec-4508-87b8-211f87532513] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004916869s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-530639
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-530639: (5.833796542s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/HelmTiller (11.54s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.398221ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-rr874" [a5ad8512-3f72-43be-a53c-23106bcd3367] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.021957641s
addons_test.go:475: (dbg) Run:  kubectl --context addons-530639 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-530639 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.827203524s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.54s)

TestAddons/parallel/CSI (62.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.143943ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-530639 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-530639 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [26b79ec7-8ea7-488f-95d8-400147c4ee7a] Pending
helpers_test.go:344: "task-pv-pod" [26b79ec7-8ea7-488f-95d8-400147c4ee7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [26b79ec7-8ea7-488f-95d8-400147c4ee7a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004236641s
addons_test.go:590: (dbg) Run:  kubectl --context addons-530639 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-530639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-530639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-530639 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-530639 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-530639 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-530639 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [056a63b5-0c77-483f-a500-1cf89fcd88f5] Pending
helpers_test.go:344: "task-pv-pod-restore" [056a63b5-0c77-483f-a500-1cf89fcd88f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [056a63b5-0c77-483f-a500-1cf89fcd88f5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004681765s
addons_test.go:632: (dbg) Run:  kubectl --context addons-530639 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-530639 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-530639 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.943545808s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable volumesnapshots --alsologtostderr -v=1: (1.158182559s)
--- PASS: TestAddons/parallel/CSI (62.56s)

TestAddons/parallel/Headlamp (19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-530639 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-7nfpw" [1990d7fd-159e-400b-bc55-8289ab481a9b] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-7nfpw" [1990d7fd-159e-400b-bc55-8289ab481a9b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-7nfpw" [1990d7fd-159e-400b-bc55-8289ab481a9b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.102321561s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable headlamp --alsologtostderr -v=1: (5.926047915s)
--- PASS: TestAddons/parallel/Headlamp (19.00s)

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-gvz84" [768650e4-f74c-4cbf-bd3b-8bce57cefd4a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004634778s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-530639
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (57.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-530639 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-530639 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b5b30886-348b-4032-bced-16c4efc90d0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b5b30886-348b-4032-bced-16c4efc90d0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b5b30886-348b-4032-bced-16c4efc90d0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003321873s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-530639 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 ssh "cat /opt/local-path-provisioner/pvc-d9488103-fa6b-4b30-86cd-3775be1f0d86_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-530639 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-530639 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.793651827s)
--- PASS: TestAddons/parallel/LocalPath (57.62s)

TestAddons/parallel/NvidiaDevicePlugin (6.91s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dwxvz" [ec199bca-5011-4285-b91f-ad5994dfe228] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003473039s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-530639
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.91s)

TestAddons/parallel/Yakd (11.1s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hqx8h" [f3328086-cbc2-431c-b40f-6487ab41b743] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.013312677s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-530639 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-530639 addons disable yakd --alsologtostderr -v=1: (6.088242777s)
--- PASS: TestAddons/parallel/Yakd (11.10s)

TestCertOptions (88.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-373568 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-373568 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m27.385789798s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-373568 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-373568 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-373568 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-373568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-373568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-373568: (1.026147699s)
--- PASS: TestCertOptions (88.87s)

TestCertExpiration (376.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-156240 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0826 11:57:03.550226  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-156240 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.01604459s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-156240 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-156240 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m12.887300072s)
helpers_test.go:175: Cleaning up "cert-expiration-156240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-156240
--- PASS: TestCertExpiration (376.90s)

TestForceSystemdFlag (74.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-399339 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0826 11:57:20.477496  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-399339 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.543807638s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-399339 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-399339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-399339
--- PASS: TestForceSystemdFlag (74.54s)

TestForceSystemdEnv (46.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-585377 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-585377 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.004287025s)
helpers_test.go:175: Cleaning up "force-systemd-env-585377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-585377
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-585377: (1.277890783s)
--- PASS: TestForceSystemdEnv (46.28s)

TestKVMDriverInstallOrUpdate (3.93s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.93s)

TestErrorSpam/setup (42.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-088378 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088378 --driver=kvm2  --container-runtime=crio
E0826 10:59:34.327079  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.334308  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.345720  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.367157  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.408769  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.490295  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.651928  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:34.974313  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:35.616490  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:36.898216  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 10:59:39.460131  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-088378 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088378 --driver=kvm2  --container-runtime=crio: (42.262344393s)
--- PASS: TestErrorSpam/setup (42.26s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 pause
E0826 10:59:44.582069  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (4.8s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop: (1.628771095s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop: (1.696262009s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-088378 --log_dir /tmp/nospam-088378 stop: (1.471451072s)
--- PASS: TestErrorSpam/stop (4.80s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19501-99403/.minikube/files/etc/test/nested/copy/106598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0826 10:59:54.823834  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:00:15.305280  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-497672 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.834148932s)
--- PASS: TestFunctional/serial/StartWithProxy (56.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --alsologtostderr -v=8
E0826 11:00:56.266702  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-497672 --alsologtostderr -v=8: (38.600469158s)
functional_test.go:663: soft start took 38.601239408s for "functional-497672" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.60s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-497672 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:3.1: (1.795646453s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:3.3: (1.866560202s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 cache add registry.k8s.io/pause:latest: (1.693486273s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.36s)

TestFunctional/serial/CacheCmd/cache/add_local (2.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-497672 /tmp/TestFunctionalserialCacheCmdcacheadd_local3615105707/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache add minikube-local-cache-test:functional-497672
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 cache add minikube-local-cache-test:functional-497672: (2.300748843s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache delete minikube-local-cache-test:functional-497672
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-497672
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.66s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.015322ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 cache reload: (1.498430541s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 kubectl -- --context functional-497672 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-497672 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-497672 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.780038401s)
functional_test.go:761: restart took 33.780160232s for "functional-497672" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.78s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-497672 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 logs: (1.391810752s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 logs --file /tmp/TestFunctionalserialLogsFileCmd44604685/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 logs --file /tmp/TestFunctionalserialLogsFileCmd44604685/001/logs.txt: (1.435948363s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-497672 apply -f testdata/invalidsvc.yaml
E0826 11:02:18.189275  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-497672
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-497672: exit status 115 (283.210326ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.235:30377 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-497672 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 config get cpus: exit status 14 (76.121656ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 config get cpus: exit status 14 (52.986573ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
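For reference, the ConfigCmd run above walks `minikube config` through unset → get → set → get → unset → get against the functional-497672 profile, and `config get` on an unset key exits with status 14 as shown. A minimal stand-alone Go sketch of the same sequence (not part of the test suite; the binary path and profile name are taken from this log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report (path is taken from the
// log above) and returns the process exit code plus combined output.
func run(args ...string) (int, string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out)
	}
	return 0, string(out)
}

func main() {
	p := "functional-497672" // profile name from the run above
	run("-p", p, "config", "unset", "cpus")
	code, _ := run("-p", p, "config", "get", "cpus")
	fmt.Println("get after unset ->", code) // the test expects exit status 14 here
	run("-p", p, "config", "set", "cpus", "2")
	code, out := run("-p", p, "config", "get", "cpus")
	fmt.Println("get after set ->", code, out) // expects 0 and "2"
	run("-p", p, "config", "unset", "cpus")
}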

TestFunctional/parallel/DashboardCmd (11.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497672 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497672 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 115783: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.47s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.857421ms)

-- stdout --
	* [functional-497672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0826 11:02:36.010342  115279 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:02:36.010606  115279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:02:36.010619  115279 out.go:358] Setting ErrFile to fd 2...
	I0826 11:02:36.010624  115279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:02:36.010939  115279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:02:36.011561  115279 out.go:352] Setting JSON to false
	I0826 11:02:36.012703  115279 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2697,"bootTime":1724667459,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:02:36.012775  115279 start.go:139] virtualization: kvm guest
	I0826 11:02:36.014996  115279 out.go:177] * [functional-497672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:02:36.016537  115279 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:02:36.016550  115279 notify.go:220] Checking for updates...
	I0826 11:02:36.019241  115279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:02:36.020489  115279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:02:36.021957  115279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:02:36.023266  115279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:02:36.024417  115279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:02:36.026069  115279 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:02:36.026543  115279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:02:36.026613  115279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:02:36.042524  115279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37251
	I0826 11:02:36.043017  115279 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:02:36.043906  115279 main.go:141] libmachine: Using API Version  1
	I0826 11:02:36.043934  115279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:02:36.044345  115279 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:02:36.044550  115279 main.go:141] libmachine: (functional-497672) Calling .DriverName
	I0826 11:02:36.044904  115279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:02:36.045336  115279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:02:36.045385  115279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:02:36.060858  115279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0826 11:02:36.061346  115279 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:02:36.061868  115279 main.go:141] libmachine: Using API Version  1
	I0826 11:02:36.061893  115279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:02:36.062306  115279 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:02:36.062540  115279 main.go:141] libmachine: (functional-497672) Calling .DriverName
	I0826 11:02:36.098091  115279 out.go:177] * Using the kvm2 driver based on existing profile
	I0826 11:02:36.099477  115279 start.go:297] selected driver: kvm2
	I0826 11:02:36.099510  115279 start.go:901] validating driver "kvm2" against &{Name:functional-497672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-497672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:02:36.099689  115279 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:02:36.101966  115279 out.go:201] 
	W0826 11:02:36.103197  115279 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0826 11:02:36.104378  115279 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
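The dry run above is rejected with exit status 23 because 250MB is below minikube's 1800MB floor (RSRC_INSUFFICIENT_REQ_MEMORY), while the follow-up dry run without a memory override succeeds. A small sketch that replays both invocations and prints their exit codes (assumes the binary path from this log; not part of the suite):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and reports its exit status (0 on success).
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	return 0
}

func main() {
	bin := "out/minikube-linux-amd64" // path used throughout this report
	base := []string{"start", "-p", "functional-497672", "--dry-run",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio"}

	// The log above shows exit status 23 for the undersized request.
	tooSmall := append(append([]string{}, base...), "--memory", "250MB")
	fmt.Println("250MB dry run   ->", exitCode(bin, tooSmall...))

	// Without the memory override the dry run validates the existing profile.
	fmt.Println("default dry run ->", exitCode(bin, base...))
}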

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.46553ms)

-- stdout --
	* [functional-497672] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0826 11:02:44.559416  115538 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:02:44.559663  115538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:02:44.559672  115538 out.go:358] Setting ErrFile to fd 2...
	I0826 11:02:44.559677  115538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:02:44.559972  115538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:02:44.560485  115538 out.go:352] Setting JSON to false
	I0826 11:02:44.561469  115538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2706,"bootTime":1724667459,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:02:44.561534  115538 start.go:139] virtualization: kvm guest
	I0826 11:02:44.563842  115538 out.go:177] * [functional-497672] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0826 11:02:44.565592  115538 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:02:44.565624  115538 notify.go:220] Checking for updates...
	I0826 11:02:44.568119  115538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:02:44.569627  115538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:02:44.571253  115538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:02:44.572906  115538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:02:44.574379  115538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:02:44.576307  115538 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:02:44.576814  115538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:02:44.576894  115538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:02:44.593966  115538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0826 11:02:44.594489  115538 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:02:44.595168  115538 main.go:141] libmachine: Using API Version  1
	I0826 11:02:44.595193  115538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:02:44.595670  115538 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:02:44.595928  115538 main.go:141] libmachine: (functional-497672) Calling .DriverName
	I0826 11:02:44.596292  115538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:02:44.596783  115538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:02:44.596911  115538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:02:44.613093  115538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I0826 11:02:44.613571  115538 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:02:44.614236  115538 main.go:141] libmachine: Using API Version  1
	I0826 11:02:44.614267  115538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:02:44.614670  115538 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:02:44.614907  115538 main.go:141] libmachine: (functional-497672) Calling .DriverName
	I0826 11:02:44.657864  115538 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0826 11:02:44.659262  115538 start.go:297] selected driver: kvm2
	I0826 11:02:44.659280  115538 start.go:901] validating driver "kvm2" against &{Name:functional-497672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-497672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0826 11:02:44.659389  115538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:02:44.661847  115538 out.go:201] 
	W0826 11:02:44.663343  115538 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0826 11:02:44.664640  115538 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (22.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-497672 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-497672 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p7zhs" [9c367a8b-33aa-46fa-a740-b6ba00a762b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p7zhs" [9c367a8b-33aa-46fa-a740-b6ba00a762b6] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.021863114s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.235:30332
functional_test.go:1675: http://192.168.39.235:30332: success! body:

Hostname: hello-node-connect-67bdd5bbb4-p7zhs

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.235:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.235:30332
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.61s)
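The connect test above resolves the NodePort URL with `minikube service hello-node-connect --url` and then issues a plain GET, which the echoserver answers with the request details shown. A hedged sketch of that client step (binary path, profile and service name taken from this log; not part of the suite):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the service exercised above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-497672",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.235:30332

	// A plain GET; the echoserver reflects the hostname, headers and path.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}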

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (46.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [71ff2cdf-e915-4a94-8b05-edb7396888a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004108433s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-497672 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-497672 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-497672 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-497672 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-497672 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [facabfeb-b795-478b-bedf-40a98df1e986] Pending
helpers_test.go:344: "sp-pod" [facabfeb-b795-478b-bedf-40a98df1e986] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [facabfeb-b795-478b-bedf-40a98df1e986] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004562523s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-497672 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-497672 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-497672 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [893f16ed-e6eb-41af-8d8c-e4d64fe9104c] Pending
helpers_test.go:344: "sp-pod" [893f16ed-e6eb-41af-8d8c-e4d64fe9104c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [893f16ed-e6eb-41af-8d8c-e4d64fe9104c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005388755s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-497672 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.87s)

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh -n functional-497672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cp functional-497672:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1684201716/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh -n functional-497672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh -n functional-497672 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

TestFunctional/parallel/MySQL (26.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-497672 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-p4djv" [6968ac50-d2ef-4484-ac69-64c6775a3bb9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-p4djv" [6968ac50-d2ef-4484-ac69-64c6775a3bb9] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004816697s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-497672 exec mysql-6cdb49bbb-p4djv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-497672 exec mysql-6cdb49bbb-p4djv -- mysql -ppassword -e "show databases;": exit status 1 (257.781368ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-497672 exec mysql-6cdb49bbb-p4djv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-497672 exec mysql-6cdb49bbb-p4djv -- mysql -ppassword -e "show databases;": exit status 1 (142.172244ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-497672 exec mysql-6cdb49bbb-p4djv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.14s)
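The two failed `show databases;` attempts above are expected: mysqld is still initializing when the pod first reports Running, so the socket refuses connections and the test simply retries until the query succeeds. A rough sketch of that retry loop (pod name and context taken from this log; kubectl assumed on PATH; not part of the suite):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-p4djv" // pod name from the run above

	// mysqld may still be initializing after the pod turns Running, so retry
	// the query until the server accepts connections, as the test does.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-497672",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}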

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/106598/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /etc/test/nested/copy/106598/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/106598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /etc/ssl/certs/106598.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/106598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /usr/share/ca-certificates/106598.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1065982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /etc/ssl/certs/1065982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1065982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /usr/share/ca-certificates/1065982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-497672 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "sudo systemctl is-active docker": exit status 1 (218.721931ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "sudo systemctl is-active containerd": exit status 1 (210.006596ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.98s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-497672 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-497672 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bm9wq" [3df165bd-26e4-47e2-ab78-1aa6b84501c3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bm9wq" [3df165bd-26e4-47e2-ab78-1aa6b84501c3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004188891s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service list -o json
functional_test.go:1494: Took "529.584081ms" to run "out/minikube-linux-amd64 -p functional-497672 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.235:30357
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.235:30357
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "361.807829ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "185.227561ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "294.458673ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "55.186159ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (14.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdany-port2457573744/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724670156263177718" to /tmp/TestFunctionalparallelMountCmdany-port2457573744/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724670156263177718" to /tmp/TestFunctionalparallelMountCmdany-port2457573744/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724670156263177718" to /tmp/TestFunctionalparallelMountCmdany-port2457573744/001/test-1724670156263177718
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.303287ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 26 11:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 26 11:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 26 11:02 test-1724670156263177718
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh cat /mount-9p/test-1724670156263177718
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-497672 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f8e3a5cd-a240-4f3c-a28e-33211d59e5b8] Pending
helpers_test.go:344: "busybox-mount" [f8e3a5cd-a240-4f3c-a28e-33211d59e5b8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f8e3a5cd-a240-4f3c-a28e-33211d59e5b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f8e3a5cd-a240-4f3c-a28e-33211d59e5b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004639605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-497672 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdany-port2457573744/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.52s)
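The mount test runs `minikube mount` as a background daemon and then probes `findmnt -T /mount-9p` over SSH; the single non-zero exit above is the probe racing the 9p mount becoming visible, after which the retry succeeds. A small sketch of that polling step (binary path, profile and mount point taken from this log; not part of the suite):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount shows up inside the guest; the first probe in
	// the log above lost that race and exited non-zero before succeeding.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-497672",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("/mount-9p never appeared")
}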

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497672 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-497672
localhost/kicbase/echo-server:functional-497672
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497672 image ls --format short --alsologtostderr:
I0826 11:02:58.080609  116694 out.go:345] Setting OutFile to fd 1 ...
I0826 11:02:58.080721  116694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.080729  116694 out.go:358] Setting ErrFile to fd 2...
I0826 11:02:58.080735  116694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.080949  116694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
I0826 11:02:58.081578  116694 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.081689  116694 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.082118  116694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.082183  116694 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.098883  116694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
I0826 11:02:58.099392  116694 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.099978  116694 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.100004  116694 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.100540  116694 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.100754  116694 main.go:141] libmachine: (functional-497672) Calling .GetState
I0826 11:02:58.103013  116694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.103057  116694 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.119097  116694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
I0826 11:02:58.119712  116694 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.120261  116694 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.120284  116694 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.120633  116694 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.120869  116694 main.go:141] libmachine: (functional-497672) Calling .DriverName
I0826 11:02:58.121150  116694 ssh_runner.go:195] Run: systemctl --version
I0826 11:02:58.121190  116694 main.go:141] libmachine: (functional-497672) Calling .GetSSHHostname
I0826 11:02:58.124426  116694 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.124866  116694 main.go:141] libmachine: (functional-497672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:9d", ip: ""} in network mk-functional-497672: {Iface:virbr1 ExpiryTime:2024-08-26 12:00:07 +0000 UTC Type:0 Mac:52:54:00:3d:53:9d Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-497672 Clientid:01:52:54:00:3d:53:9d}
I0826 11:02:58.124905  116694 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined IP address 192.168.39.235 and MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.125026  116694 main.go:141] libmachine: (functional-497672) Calling .GetSSHPort
I0826 11:02:58.125202  116694 main.go:141] libmachine: (functional-497672) Calling .GetSSHKeyPath
I0826 11:02:58.125340  116694 main.go:141] libmachine: (functional-497672) Calling .GetSSHUsername
I0826 11:02:58.125521  116694 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/functional-497672/id_rsa Username:docker}
I0826 11:02:58.204133  116694 ssh_runner.go:195] Run: sudo crictl images --output json
I0826 11:02:58.283189  116694 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.283207  116694 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.283480  116694 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.283492  116694 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
I0826 11:02:58.283501  116694 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.283511  116694 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.283520  116694 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.283799  116694 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.283814  116694 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.283840  116694 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls --format table --alsologtostderr
2024/08/26 11:02:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497672 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-497672  | 80d9ee7f4c186 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| localhost/kicbase/echo-server           | functional-497672  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497672 image ls --format table --alsologtostderr:
I0826 11:02:58.575250  116805 out.go:345] Setting OutFile to fd 1 ...
I0826 11:02:58.575582  116805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.575626  116805 out.go:358] Setting ErrFile to fd 2...
I0826 11:02:58.575642  116805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.575925  116805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
I0826 11:02:58.576757  116805 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.576917  116805 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.578384  116805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.578437  116805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.594965  116805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
I0826 11:02:58.595449  116805 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.596030  116805 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.596069  116805 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.596502  116805 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.596709  116805 main.go:141] libmachine: (functional-497672) Calling .GetState
I0826 11:02:58.598763  116805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.598812  116805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.614696  116805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
I0826 11:02:58.615235  116805 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.615759  116805 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.615786  116805 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.616167  116805 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.616367  116805 main.go:141] libmachine: (functional-497672) Calling .DriverName
I0826 11:02:58.616583  116805 ssh_runner.go:195] Run: systemctl --version
I0826 11:02:58.616614  116805 main.go:141] libmachine: (functional-497672) Calling .GetSSHHostname
I0826 11:02:58.619630  116805 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.620019  116805 main.go:141] libmachine: (functional-497672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:9d", ip: ""} in network mk-functional-497672: {Iface:virbr1 ExpiryTime:2024-08-26 12:00:07 +0000 UTC Type:0 Mac:52:54:00:3d:53:9d Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-497672 Clientid:01:52:54:00:3d:53:9d}
I0826 11:02:58.620051  116805 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined IP address 192.168.39.235 and MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.620167  116805 main.go:141] libmachine: (functional-497672) Calling .GetSSHPort
I0826 11:02:58.620392  116805 main.go:141] libmachine: (functional-497672) Calling .GetSSHKeyPath
I0826 11:02:58.620596  116805 main.go:141] libmachine: (functional-497672) Calling .GetSSHUsername
I0826 11:02:58.620803  116805 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/functional-497672/id_rsa Username:docker}
I0826 11:02:58.697729  116805 ssh_runner.go:195] Run: sudo crictl images --output json
I0826 11:02:58.740209  116805 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.740227  116805 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.740494  116805 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
I0826 11:02:58.740519  116805 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.740533  116805 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.740542  116805 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.740554  116805 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.740790  116805 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.740806  116805 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.740822  116805 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497672 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"80d9ee7f4c186307776428cd13b71da7e40116796bf96d94a16e2675ba25141e","repoDigests":["localhost/minikube-local-cache-test@sha256:3cae40e1153d51b9015869a5676f0c26017f5ef23212fdb26d13c7d43482129f"],"repoTags":["localhost/minikube-local-cache-test:functional-497672"],"size":"3328"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id"
:"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/my
sql:5.7"],"size":"519571821"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"]
,"size":"89437512"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-497672"],"size":"4943877"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredn
s/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/das
hboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@
sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497672 image ls --format json --alsologtostderr:
I0826 11:02:58.338870  116748 out.go:345] Setting OutFile to fd 1 ...
I0826 11:02:58.339360  116748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.339369  116748 out.go:358] Setting ErrFile to fd 2...
I0826 11:02:58.339374  116748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.339604  116748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
I0826 11:02:58.340313  116748 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.340452  116748 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.340923  116748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.340978  116748 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.358138  116748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
I0826 11:02:58.358599  116748 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.359211  116748 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.359254  116748 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.359690  116748 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.359927  116748 main.go:141] libmachine: (functional-497672) Calling .GetState
I0826 11:02:58.361868  116748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.361917  116748 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.378575  116748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
I0826 11:02:58.379106  116748 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.379738  116748 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.379767  116748 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.380126  116748 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.380339  116748 main.go:141] libmachine: (functional-497672) Calling .DriverName
I0826 11:02:58.380609  116748 ssh_runner.go:195] Run: systemctl --version
I0826 11:02:58.380655  116748 main.go:141] libmachine: (functional-497672) Calling .GetSSHHostname
I0826 11:02:58.384191  116748 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.384774  116748 main.go:141] libmachine: (functional-497672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:9d", ip: ""} in network mk-functional-497672: {Iface:virbr1 ExpiryTime:2024-08-26 12:00:07 +0000 UTC Type:0 Mac:52:54:00:3d:53:9d Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-497672 Clientid:01:52:54:00:3d:53:9d}
I0826 11:02:58.384802  116748 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined IP address 192.168.39.235 and MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.384965  116748 main.go:141] libmachine: (functional-497672) Calling .GetSSHPort
I0826 11:02:58.385154  116748 main.go:141] libmachine: (functional-497672) Calling .GetSSHKeyPath
I0826 11:02:58.385389  116748 main.go:141] libmachine: (functional-497672) Calling .GetSSHUsername
I0826 11:02:58.385553  116748 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/functional-497672/id_rsa Username:docker}
I0826 11:02:58.479741  116748 ssh_runner.go:195] Run: sudo crictl images --output json
I0826 11:02:58.519303  116748 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.519319  116748 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.519654  116748 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.519677  116748 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.519694  116748 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.519703  116748 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.520712  116748 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.520730  116748 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.520734  116748 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
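The stdout above is a flat JSON array of objects with id, repoDigests, repoTags, and size fields (size in bytes, encoded as a string). Below is a minimal Go sketch, not taken from the minikube repository, showing how that output could be decoded; the struct definition and the images.json capture file are assumptions for the example.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the JSON above; the struct itself is an
// assumption for this sketch, not a type from the minikube source tree.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// Assumes the stdout above was captured to a file, e.g.:
	//   out/minikube-linux-amd64 -p functional-497672 image ls --format json > images.json
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-70v %s bytes\n", img.RepoTags, img.Size)
	}
}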

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497672 image ls --format yaml --alsologtostderr:
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 80d9ee7f4c186307776428cd13b71da7e40116796bf96d94a16e2675ba25141e
repoDigests:
- localhost/minikube-local-cache-test@sha256:3cae40e1153d51b9015869a5676f0c26017f5ef23212fdb26d13c7d43482129f
repoTags:
- localhost/minikube-local-cache-test:functional-497672
size: "3328"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-497672
size: "4943877"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497672 image ls --format yaml --alsologtostderr:
I0826 11:02:58.083090  116693 out.go:345] Setting OutFile to fd 1 ...
I0826 11:02:58.083245  116693 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.083258  116693 out.go:358] Setting ErrFile to fd 2...
I0826 11:02:58.083265  116693 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.083848  116693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
I0826 11:02:58.084491  116693 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.084603  116693 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.084994  116693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.085056  116693 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.100313  116693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
I0826 11:02:58.100811  116693 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.101373  116693 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.101398  116693 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.101819  116693 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.102063  116693 main.go:141] libmachine: (functional-497672) Calling .GetState
I0826 11:02:58.104075  116693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.104135  116693 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.119505  116693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
I0826 11:02:58.119914  116693 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.120416  116693 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.120441  116693 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.120871  116693 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.121102  116693 main.go:141] libmachine: (functional-497672) Calling .DriverName
I0826 11:02:58.121316  116693 ssh_runner.go:195] Run: systemctl --version
I0826 11:02:58.121343  116693 main.go:141] libmachine: (functional-497672) Calling .GetSSHHostname
I0826 11:02:58.124656  116693 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.125071  116693 main.go:141] libmachine: (functional-497672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:9d", ip: ""} in network mk-functional-497672: {Iface:virbr1 ExpiryTime:2024-08-26 12:00:07 +0000 UTC Type:0 Mac:52:54:00:3d:53:9d Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-497672 Clientid:01:52:54:00:3d:53:9d}
I0826 11:02:58.125101  116693 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined IP address 192.168.39.235 and MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.125226  116693 main.go:141] libmachine: (functional-497672) Calling .GetSSHPort
I0826 11:02:58.125408  116693 main.go:141] libmachine: (functional-497672) Calling .GetSSHKeyPath
I0826 11:02:58.125601  116693 main.go:141] libmachine: (functional-497672) Calling .GetSSHUsername
I0826 11:02:58.125759  116693 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/functional-497672/id_rsa Username:docker}
I0826 11:02:58.202339  116693 ssh_runner.go:195] Run: sudo crictl images --output json
I0826 11:02:58.268130  116693 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.268151  116693 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.268461  116693 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
I0826 11:02:58.268487  116693 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.268500  116693 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.268513  116693 main.go:141] libmachine: Making call to close driver server
I0826 11:02:58.268526  116693 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:02:58.268796  116693 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:02:58.268811  116693 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:02:58.268848  116693 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh pgrep buildkitd: exit status 1 (209.604242ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image build -t localhost/my-image:functional-497672 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 image build -t localhost/my-image:functional-497672 testdata/build --alsologtostderr: (3.153587221s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497672 image build -t localhost/my-image:functional-497672 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a4f5c3dc336
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-497672
--> 1cb4d78f4b6
Successfully tagged localhost/my-image:functional-497672
1cb4d78f4b6143ba155a1cfec114c60847a4db96530f650bfc1d342c87142685
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497672 image build -t localhost/my-image:functional-497672 testdata/build --alsologtostderr:
I0826 11:02:58.530011  116793 out.go:345] Setting OutFile to fd 1 ...
I0826 11:02:58.530233  116793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.530249  116793 out.go:358] Setting ErrFile to fd 2...
I0826 11:02:58.530258  116793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0826 11:02:58.530554  116793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
I0826 11:02:58.531396  116793 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.532016  116793 config.go:182] Loaded profile config "functional-497672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0826 11:02:58.532431  116793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.532469  116793 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.548178  116793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
I0826 11:02:58.548841  116793 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.549539  116793 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.549566  116793 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.549965  116793 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.550171  116793 main.go:141] libmachine: (functional-497672) Calling .GetState
I0826 11:02:58.552590  116793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0826 11:02:58.552646  116793 main.go:141] libmachine: Launching plugin server for driver kvm2
I0826 11:02:58.572200  116793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
I0826 11:02:58.572692  116793 main.go:141] libmachine: () Calling .GetVersion
I0826 11:02:58.573141  116793 main.go:141] libmachine: Using API Version  1
I0826 11:02:58.573162  116793 main.go:141] libmachine: () Calling .SetConfigRaw
I0826 11:02:58.573473  116793 main.go:141] libmachine: () Calling .GetMachineName
I0826 11:02:58.573676  116793 main.go:141] libmachine: (functional-497672) Calling .DriverName
I0826 11:02:58.573886  116793 ssh_runner.go:195] Run: systemctl --version
I0826 11:02:58.573918  116793 main.go:141] libmachine: (functional-497672) Calling .GetSSHHostname
I0826 11:02:58.577003  116793 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.577406  116793 main.go:141] libmachine: (functional-497672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:9d", ip: ""} in network mk-functional-497672: {Iface:virbr1 ExpiryTime:2024-08-26 12:00:07 +0000 UTC Type:0 Mac:52:54:00:3d:53:9d Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-497672 Clientid:01:52:54:00:3d:53:9d}
I0826 11:02:58.577450  116793 main.go:141] libmachine: (functional-497672) DBG | domain functional-497672 has defined IP address 192.168.39.235 and MAC address 52:54:00:3d:53:9d in network mk-functional-497672
I0826 11:02:58.577631  116793 main.go:141] libmachine: (functional-497672) Calling .GetSSHPort
I0826 11:02:58.577796  116793 main.go:141] libmachine: (functional-497672) Calling .GetSSHKeyPath
I0826 11:02:58.577946  116793 main.go:141] libmachine: (functional-497672) Calling .GetSSHUsername
I0826 11:02:58.578055  116793 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/functional-497672/id_rsa Username:docker}
I0826 11:02:58.653425  116793 build_images.go:161] Building image from path: /tmp/build.1969736538.tar
I0826 11:02:58.653501  116793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0826 11:02:58.663444  116793 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1969736538.tar
I0826 11:02:58.668749  116793 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1969736538.tar: stat -c "%s %y" /var/lib/minikube/build/build.1969736538.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1969736538.tar': No such file or directory
I0826 11:02:58.668788  116793 ssh_runner.go:362] scp /tmp/build.1969736538.tar --> /var/lib/minikube/build/build.1969736538.tar (3072 bytes)
I0826 11:02:58.699350  116793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1969736538
I0826 11:02:58.717987  116793 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1969736538 -xf /var/lib/minikube/build/build.1969736538.tar
I0826 11:02:58.739763  116793 crio.go:315] Building image: /var/lib/minikube/build/build.1969736538
I0826 11:02:58.739848  116793 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-497672 /var/lib/minikube/build/build.1969736538 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0826 11:03:01.602779  116793 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-497672 /var/lib/minikube/build/build.1969736538 --cgroup-manager=cgroupfs: (2.862907007s)
I0826 11:03:01.602874  116793 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1969736538
I0826 11:03:01.617905  116793 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1969736538.tar
I0826 11:03:01.632663  116793 build_images.go:217] Built localhost/my-image:functional-497672 from /tmp/build.1969736538.tar
I0826 11:03:01.632710  116793 build_images.go:133] succeeded building to: functional-497672
I0826 11:03:01.632718  116793 build_images.go:134] failed building to: 
I0826 11:03:01.632750  116793 main.go:141] libmachine: Making call to close driver server
I0826 11:03:01.632765  116793 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:03:01.633162  116793 main.go:141] libmachine: (functional-497672) DBG | Closing plugin on server side
I0826 11:03:01.633195  116793 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:03:01.633207  116793 main.go:141] libmachine: Making call to close connection to plugin binary
I0826 11:03:01.633216  116793 main.go:141] libmachine: Making call to close driver server
I0826 11:03:01.633225  116793 main.go:141] libmachine: (functional-497672) Calling .Close
I0826 11:03:01.633558  116793 main.go:141] libmachine: Successfully made call to close driver server
I0826 11:03:01.633575  116793 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
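The STEP lines in the stdout above imply a three-step build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), and the stderr shows minikube tarring that context, copying it into the VM, and running podman build with --cgroup-manager=cgroupfs. The following is a hedged Go sketch, not the test's own code, that re-runs the same CLI invocation and checks for the "Successfully tagged" marker printed above; paths assume the same working tree as the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-497672" // profile name from the log above
	tag := "localhost/my-image:" + profile

	// Same invocation as functional_test.go:315 above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
	}
	if !strings.Contains(string(out), "Successfully tagged "+tag) {
		panic("build output did not report the expected tag")
	}
	fmt.Println("built and tagged", tag)
}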

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.864955693s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-497672
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image load --daemon kicbase/echo-server:functional-497672 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 image load --daemon kicbase/echo-server:functional-497672 --alsologtostderr: (1.469306785s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image load --daemon kicbase/echo-server:functional-497672 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-497672
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image load --daemon kicbase/echo-server:functional-497672 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdspecific-port1952134315/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.308342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdspecific-port1952134315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "sudo umount -f /mount-9p": exit status 1 (274.260654ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-497672 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdspecific-port1952134315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)
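The test above starts the 9p mount as a background daemon on a fixed port (46464) and tolerates an initial findmnt failure before the mount becomes visible in the guest. A minimal sketch of that pattern in Go follows, assuming the same minikube binary and profile as the log; the host source path is an assumption for the example, and this is illustrative rather than the test's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-497672"     // profile name from the log above
	spec := "/tmp/mount-src:/mount-9p" // host:guest path, an assumption for the sketch

	// Counterpart of the daemon started at functional_test_mount_test.go:213.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		spec, "--port", "46464")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Retry findmnt because the first probe can race the 9p mount becoming
	// visible in the guest (see the initial exit status 1 above).
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}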

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image save kicbase/echo-server:functional-497672 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T" /mount1: exit status 1 (314.178263ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-497672 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2670535824/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image rm kicbase/echo-server:functional-497672 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-497672 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.899258851s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.15s)
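Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a round trip: save a tagged image to a tarball, remove it from the runtime, load it back, and confirm it reappears in image ls. A hedged Go sketch of that sequence is below, using only subcommands that appear verbatim in the log; the tarball path is shortened to /tmp as an assumption (the log writes it under the Jenkins workspace).

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout the log and panics on failure.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	profile := "functional-497672"
	img := "kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar" // shortened path; the log uses the Jenkins workspace

	run("-p", profile, "image", "save", img, tar, "--alsologtostderr") // ImageSaveToFile
	run("-p", profile, "image", "rm", img, "--alsologtostderr")        // ImageRemove
	run("-p", profile, "image", "load", tar, "--alsologtostderr")      // ImageLoadFromFile
	fmt.Print(run("-p", profile, "image", "ls"))                       // image should reappear in the listing
}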

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-497672
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-497672 image save --daemon kicbase/echo-server:functional-497672 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-497672
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-497672
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-497672
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-497672
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-055395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0826 11:04:34.328233  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:05:02.031362  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-055395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.517739404s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.22s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-055395 -- rollout status deployment/busybox: (3.781802386s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-8cc92 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-gbwm6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-xh6vw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-8cc92 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-gbwm6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-xh6vw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-8cc92 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-gbwm6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-xh6vw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.00s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-8cc92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-8cc92 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-gbwm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-gbwm6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-xh6vw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-055395 -- exec busybox-7dff88458-xh6vw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
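For context on the check above: each busybox pod resolves host.minikube.internal, extracts the host IP with the pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3", and then pings it (192.168.39.1 in this run). The Go sketch below only mirrors that text extraction; it is not part of the test suite, and the sample nslookup output in main() is hypothetical.

// parse_nslookup.go - standalone illustration, not minikube code.
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics the shell pipeline awk 'NR==5' | cut -d' ' -f3:
// take the 5th line of the nslookup output and return its 3rd
// space-separated field.
func hostIPFromNslookup(out string) (string, bool) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Hypothetical busybox nslookup output; the real format can differ.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.39.1 host.minikube.internal\n"
	ip, ok := hostIPFromNslookup(sample)
	fmt.Println(ip, ok) // 192.168.39.1 true
}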

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-055395 -v=7 --alsologtostderr
E0826 11:07:20.477020  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.483516  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.494974  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.516470  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.558001  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.639475  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:20.801092  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:21.122441  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:21.764189  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:23.045784  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:07:25.607365  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-055395 -v=7 --alsologtostderr: (57.912748089s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
E0826 11:07:30.729122  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.79s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-055395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp testdata/cp-test.txt ha-055395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395:/home/docker/cp-test.txt ha-055395-m02:/home/docker/cp-test_ha-055395_ha-055395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test_ha-055395_ha-055395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395:/home/docker/cp-test.txt ha-055395-m03:/home/docker/cp-test_ha-055395_ha-055395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test_ha-055395_ha-055395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395:/home/docker/cp-test.txt ha-055395-m04:/home/docker/cp-test_ha-055395_ha-055395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test_ha-055395_ha-055395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp testdata/cp-test.txt ha-055395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m02:/home/docker/cp-test.txt ha-055395:/home/docker/cp-test_ha-055395-m02_ha-055395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test_ha-055395-m02_ha-055395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m02:/home/docker/cp-test.txt ha-055395-m03:/home/docker/cp-test_ha-055395-m02_ha-055395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test_ha-055395-m02_ha-055395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m02:/home/docker/cp-test.txt ha-055395-m04:/home/docker/cp-test_ha-055395-m02_ha-055395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test_ha-055395-m02_ha-055395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp testdata/cp-test.txt ha-055395-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt ha-055395:/home/docker/cp-test_ha-055395-m03_ha-055395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test_ha-055395-m03_ha-055395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt ha-055395-m02:/home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test_ha-055395-m03_ha-055395-m02.txt"
E0826 11:07:40.971744  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m03:/home/docker/cp-test.txt ha-055395-m04:/home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test_ha-055395-m03_ha-055395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp testdata/cp-test.txt ha-055395-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3651242830/001/cp-test_ha-055395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt ha-055395:/home/docker/cp-test_ha-055395-m04_ha-055395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395 "sudo cat /home/docker/cp-test_ha-055395-m04_ha-055395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt ha-055395-m02:/home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m02 "sudo cat /home/docker/cp-test_ha-055395-m04_ha-055395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 cp ha-055395-m04:/home/docker/cp-test.txt ha-055395-m03:/home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 ssh -n ha-055395-m03 "sudo cat /home/docker/cp-test_ha-055395-m04_ha-055395-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.17s)
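The CopyFile sequence above is a full cross-copy matrix over the four nodes of ha-055395: cp-test.txt is pushed to each node, copied back to the host, copied from each node to every other node, and every step is verified with ssh -n <node> "sudo cat ...". The sketch below is only an illustration of how those command pairs enumerate; it is not the test's implementation and it omits the copy-back-to-host step.

// copy_matrix.go - illustrative only; prints the node-to-node cp/verify pairs.
package main

import "fmt"

func main() {
	profile := "ha-055395"
	nodes := []string{"ha-055395", "ha-055395-m02", "ha-055395-m03", "ha-055395-m04"}

	for _, src := range nodes {
		// seed the source node and read the file back
		fmt.Printf("minikube -p %s cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", profile, src)
		fmt.Printf("minikube -p %s ssh -n %s \"sudo cat /home/docker/cp-test.txt\"\n", profile, src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// copy node-to-node, then verify on the destination
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:%s\n", profile, src, dst, target)
			fmt.Printf("minikube -p %s ssh -n %s \"sudo cat %s\"\n", profile, dst, target)
		}
	}
}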

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.495352978s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-055395 node delete m03 -v=7 --alsologtostderr: (15.687420207s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.49s)
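The go-template passed to kubectl above prints one line per node condition whose type is "Ready", showing its status. The snippet below is a minimal sketch (not how the test checks the result) that evaluates the same template locally with Go's text/template against a hypothetical two-node list; the wrapping single quotes from the command line are dropped, and the field names mirror the JSON that kubectl would feed the template.

// ready_template.go - illustration only.
package main

import (
	"os"
	"text/template"
)

func main() {
	// Inner template from the kubectl command above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical node list, reduced to the fields the template touches.
	nodes := map[string]interface{}{
		"items": []map[string]interface{}{
			{"status": map[string]interface{}{"conditions": []map[string]interface{}{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]interface{}{"conditions": []map[string]interface{}{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per Ready node
		panic(err)
	}
}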

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (455.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-055395 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0826 11:22:20.477805  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:23:43.543554  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:24:34.326528  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 11:27:20.478104  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-055395 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m34.516929516s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (455.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-055395 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-055395 --control-plane -v=7 --alsologtostderr: (1m19.231231613s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-055395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (58.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-295856 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0826 11:29:34.329852  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-295856 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.508138878s)
--- PASS: TestJSONOutput/start/Command (58.51s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-295856 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-295856 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-295856 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-295856 --output=json --user=testUser: (7.333708126s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-273091 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-273091 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.251488ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"734c41f6-a13b-494d-b0db-d41f2ccd0e03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-273091] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0713c05-3ced-47eb-8434-3c2189dd6e67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19501"}}
	{"specversion":"1.0","id":"6a6d6716-3bcd-4062-b836-7bfca57ff23f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ccffc630-b506-4f43-8da8-240f441eeca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig"}}
	{"specversion":"1.0","id":"a6b4a08a-21e5-41e1-b2a7-aec0b3e31781","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube"}}
	{"specversion":"1.0","id":"8b318035-9bdb-4040-8367-a30804d13d9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3baf7101-20af-49d5-a88d-f86e7933a599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8aeab725-7ad7-42c2-b689-8a8c520b64be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-273091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-273091
--- PASS: TestErrorJSONOutput (0.21s)
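Every line in the stdout block above is a CloudEvents-style JSON event emitted by --output=json. The snippet below is a minimal, self-contained sketch (not taken from minikube's source) that decodes the error event captured in this run and pulls out its type, exit code, and message.

// decode_event.go - minimal sketch, not minikube code.
package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent models the CloudEvents-style lines shown in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event captured in this run (copied verbatim from the log).
	line := `{"specversion":"1.0","id":"8aeab725-7ad7-42c2-b689-8a8c520b64be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type)             // io.k8s.sigs.minikube.error
	fmt.Println(ev.Data["exitcode"]) // 56
	fmt.Println(ev.Data["message"])  // The driver 'fail' is not supported on linux/amd64
}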

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (84.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-999935 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-999935 --driver=kvm2  --container-runtime=crio: (39.203914939s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-002046 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-002046 --driver=kvm2  --container-runtime=crio: (42.584237847s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-999935
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-002046
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-002046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-002046
helpers_test.go:175: Cleaning up "first-999935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-999935
--- PASS: TestMinikubeProfile (84.49s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-035463 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-035463 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.128867347s)
E0826 11:32:20.477256  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountFirst (28.13s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-035463 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-035463 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-051286 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0826 11:32:37.396560  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-051286 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.652333401s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.65s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-035463 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-051286
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-051286: (1.277155797s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-051286
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-051286: (19.894457658s)
--- PASS: TestMountStart/serial/RestartStopped (20.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-051286 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-523807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0826 11:34:34.328937  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-523807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.748732585s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-523807 -- rollout status deployment/busybox: (4.044109088s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-9mhm9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-g59tc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-9mhm9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-g59tc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-9mhm9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-g59tc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.56s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-9mhm9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-9mhm9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-g59tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-523807 -- exec busybox-7dff88458-g59tc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (51.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-523807 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-523807 -v 3 --alsologtostderr: (51.241385216s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.85s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-523807 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp testdata/cp-test.txt multinode-523807:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807:/home/docker/cp-test.txt multinode-523807-m02:/home/docker/cp-test_multinode-523807_multinode-523807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test_multinode-523807_multinode-523807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807:/home/docker/cp-test.txt multinode-523807-m03:/home/docker/cp-test_multinode-523807_multinode-523807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test_multinode-523807_multinode-523807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp testdata/cp-test.txt multinode-523807-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt multinode-523807:/home/docker/cp-test_multinode-523807-m02_multinode-523807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test_multinode-523807-m02_multinode-523807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m02:/home/docker/cp-test.txt multinode-523807-m03:/home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test_multinode-523807-m02_multinode-523807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp testdata/cp-test.txt multinode-523807-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4218272271/001/cp-test_multinode-523807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt multinode-523807:/home/docker/cp-test_multinode-523807-m03_multinode-523807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807 "sudo cat /home/docker/cp-test_multinode-523807-m03_multinode-523807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 cp multinode-523807-m03:/home/docker/cp-test.txt multinode-523807-m02:/home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 ssh -n multinode-523807-m02 "sudo cat /home/docker/cp-test_multinode-523807-m03_multinode-523807-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.43s)

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-523807 node stop m03: (1.451646217s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-523807 status: exit status 7 (448.345073ms)

                                                
                                                
-- stdout --
	multinode-523807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-523807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-523807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr: exit status 7 (446.257196ms)

                                                
                                                
-- stdout --
	multinode-523807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-523807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-523807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:36:11.517137  134871 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:36:11.517417  134871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:36:11.517429  134871 out.go:358] Setting ErrFile to fd 2...
	I0826 11:36:11.517433  134871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:36:11.517655  134871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:36:11.517886  134871 out.go:352] Setting JSON to false
	I0826 11:36:11.517920  134871 mustload.go:65] Loading cluster: multinode-523807
	I0826 11:36:11.517961  134871 notify.go:220] Checking for updates...
	I0826 11:36:11.518334  134871 config.go:182] Loaded profile config "multinode-523807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:36:11.518351  134871 status.go:255] checking status of multinode-523807 ...
	I0826 11:36:11.518751  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.518819  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.534753  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0826 11:36:11.535285  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.535885  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.535913  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.536229  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.536512  134871 main.go:141] libmachine: (multinode-523807) Calling .GetState
	I0826 11:36:11.538119  134871 status.go:330] multinode-523807 host status = "Running" (err=<nil>)
	I0826 11:36:11.538140  134871 host.go:66] Checking if "multinode-523807" exists ...
	I0826 11:36:11.538492  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.538538  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.555730  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0826 11:36:11.556296  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.556829  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.556860  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.557160  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.557394  134871 main.go:141] libmachine: (multinode-523807) Calling .GetIP
	I0826 11:36:11.560131  134871 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:36:11.560557  134871 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:36:11.560589  134871 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:36:11.560721  134871 host.go:66] Checking if "multinode-523807" exists ...
	I0826 11:36:11.561032  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.561090  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.577377  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0826 11:36:11.577849  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.578384  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.578403  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.578747  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.578965  134871 main.go:141] libmachine: (multinode-523807) Calling .DriverName
	I0826 11:36:11.579162  134871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:36:11.579183  134871 main.go:141] libmachine: (multinode-523807) Calling .GetSSHHostname
	I0826 11:36:11.582078  134871 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:36:11.582557  134871 main.go:141] libmachine: (multinode-523807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:92", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:33:26 +0000 UTC Type:0 Mac:52:54:00:e1:ad:92 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-523807 Clientid:01:52:54:00:e1:ad:92}
	I0826 11:36:11.582619  134871 main.go:141] libmachine: (multinode-523807) DBG | domain multinode-523807 has defined IP address 192.168.39.26 and MAC address 52:54:00:e1:ad:92 in network mk-multinode-523807
	I0826 11:36:11.582697  134871 main.go:141] libmachine: (multinode-523807) Calling .GetSSHPort
	I0826 11:36:11.582934  134871 main.go:141] libmachine: (multinode-523807) Calling .GetSSHKeyPath
	I0826 11:36:11.583087  134871 main.go:141] libmachine: (multinode-523807) Calling .GetSSHUsername
	I0826 11:36:11.583219  134871 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807/id_rsa Username:docker}
	I0826 11:36:11.672566  134871 ssh_runner.go:195] Run: systemctl --version
	I0826 11:36:11.679638  134871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:36:11.696439  134871 kubeconfig.go:125] found "multinode-523807" server: "https://192.168.39.26:8443"
	I0826 11:36:11.696476  134871 api_server.go:166] Checking apiserver status ...
	I0826 11:36:11.696518  134871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0826 11:36:11.711286  134871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1077/cgroup
	W0826 11:36:11.721187  134871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1077/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0826 11:36:11.721257  134871 ssh_runner.go:195] Run: ls
	I0826 11:36:11.725877  134871 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I0826 11:36:11.730237  134871 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I0826 11:36:11.730275  134871 status.go:422] multinode-523807 apiserver status = Running (err=<nil>)
	I0826 11:36:11.730307  134871 status.go:257] multinode-523807 status: &{Name:multinode-523807 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:36:11.730331  134871 status.go:255] checking status of multinode-523807-m02 ...
	I0826 11:36:11.730643  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.730685  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.747712  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0826 11:36:11.748220  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.748840  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.748867  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.749214  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.749428  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetState
	I0826 11:36:11.751308  134871 status.go:330] multinode-523807-m02 host status = "Running" (err=<nil>)
	I0826 11:36:11.751326  134871 host.go:66] Checking if "multinode-523807-m02" exists ...
	I0826 11:36:11.751620  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.751657  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.772592  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0826 11:36:11.773072  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.773607  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.773629  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.773988  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.774170  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetIP
	I0826 11:36:11.777046  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | domain multinode-523807-m02 has defined MAC address 52:54:00:97:db:05 in network mk-multinode-523807
	I0826 11:36:11.777460  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:db:05", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:34:26 +0000 UTC Type:0 Mac:52:54:00:97:db:05 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-523807-m02 Clientid:01:52:54:00:97:db:05}
	I0826 11:36:11.777486  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | domain multinode-523807-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:97:db:05 in network mk-multinode-523807
	I0826 11:36:11.777696  134871 host.go:66] Checking if "multinode-523807-m02" exists ...
	I0826 11:36:11.778020  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.778083  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.794128  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0826 11:36:11.794549  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.795052  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.795084  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.795374  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.795562  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .DriverName
	I0826 11:36:11.795749  134871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0826 11:36:11.795770  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetSSHHostname
	I0826 11:36:11.798581  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | domain multinode-523807-m02 has defined MAC address 52:54:00:97:db:05 in network mk-multinode-523807
	I0826 11:36:11.799039  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:db:05", ip: ""} in network mk-multinode-523807: {Iface:virbr1 ExpiryTime:2024-08-26 12:34:26 +0000 UTC Type:0 Mac:52:54:00:97:db:05 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-523807-m02 Clientid:01:52:54:00:97:db:05}
	I0826 11:36:11.799070  134871 main.go:141] libmachine: (multinode-523807-m02) DBG | domain multinode-523807-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:97:db:05 in network mk-multinode-523807
	I0826 11:36:11.799195  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetSSHPort
	I0826 11:36:11.799375  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetSSHKeyPath
	I0826 11:36:11.799501  134871 main.go:141] libmachine: (multinode-523807-m02) Calling .GetSSHUsername
	I0826 11:36:11.799624  134871 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19501-99403/.minikube/machines/multinode-523807-m02/id_rsa Username:docker}
	I0826 11:36:11.882515  134871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0826 11:36:11.895985  134871 status.go:257] multinode-523807-m02 status: &{Name:multinode-523807-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0826 11:36:11.896046  134871 status.go:255] checking status of multinode-523807-m03 ...
	I0826 11:36:11.896493  134871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0826 11:36:11.896539  134871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0826 11:36:11.912428  134871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I0826 11:36:11.912942  134871 main.go:141] libmachine: () Calling .GetVersion
	I0826 11:36:11.913461  134871 main.go:141] libmachine: Using API Version  1
	I0826 11:36:11.913497  134871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0826 11:36:11.913841  134871 main.go:141] libmachine: () Calling .GetMachineName
	I0826 11:36:11.914033  134871 main.go:141] libmachine: (multinode-523807-m03) Calling .GetState
	I0826 11:36:11.915611  134871 status.go:330] multinode-523807-m03 host status = "Stopped" (err=<nil>)
	I0826 11:36:11.915627  134871 status.go:343] host is not running, skipping remaining checks
	I0826 11:36:11.915634  134871 status.go:257] multinode-523807-m03 status: &{Name:multinode-523807-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
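The stop/status flow above in isolation; a minimal sketch with the profile and node names from the log (the non-zero exit from `status`, 7 in this run, is expected while any node is down):

	# stop only the m03 worker of the multi-node profile
	minikube -p multinode-523807 node stop m03
	# status now reports m03 as host/kubelet Stopped and exits non-zero
	minikube -p multinode-523807 status --alsologtostderr; echo "status exit code: $?"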

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-523807 node start m03 -v=7 --alsologtostderr: (39.292499944s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 node delete m03
E0826 11:42:20.476992  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-523807 node delete m03: (1.811997183s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.35s)
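The node removal and readiness check above amount to the following; a minimal sketch with the profile name from the log (the go-template is the same one the test runs, printing one status line per node's Ready condition):

	# drop the m03 node from the profile
	minikube -p multinode-523807 node delete m03
	# confirm the remaining nodes report Ready=True
	kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'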

                                                
                                    
TestMultiNode/serial/RestartMultiNode (176.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-523807 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0826 11:47:20.477703  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-523807 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m55.923831434s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-523807 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (176.48s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-523807
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-523807-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-523807-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.405686ms)

                                                
                                                
-- stdout --
	* [multinode-523807-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-523807-m02' is duplicated with machine name 'multinode-523807-m02' in profile 'multinode-523807'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-523807-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-523807-m03 --driver=kvm2  --container-runtime=crio: (41.456377483s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-523807
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-523807: exit status 80 (220.451326ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-523807 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-523807-m03 already exists in multinode-523807-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-523807-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.59s)
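Both failures above are the intended behavior: a profile may not reuse a machine name that already belongs to another profile, and `node add` refuses to create a node whose generated name collides with an existing profile. A minimal sketch of the two rejected commands from the log (exit 14 and exit 80 respectively, with `minikube` standing in for the test binary):

	# rejected: 'multinode-523807-m02' is already a machine inside profile 'multinode-523807'
	minikube start -p multinode-523807-m02 --driver=kvm2 --container-runtime=crio
	# rejected: the next node would be named multinode-523807-m03, which now exists as its own profile
	minikube node add -p multinode-523807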

                                                
                                    
TestScheduledStopUnix (114.78s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-934937 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-934937 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.146719318s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934937 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-934937 -n scheduled-stop-934937
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934937 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934937 --cancel-scheduled
E0826 11:52:20.477439  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934937 -n scheduled-stop-934937
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934937
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934937 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934937
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-934937: exit status 7 (66.782728ms)

                                                
                                                
-- stdout --
	scheduled-stop-934937
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934937 -n scheduled-stop-934937
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934937 -n scheduled-stop-934937: exit status 7 (64.343727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-934937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-934937
--- PASS: TestScheduledStopUnix (114.78s)
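A minimal sketch of the scheduled-stop flow the test walks through, using the profile name and flags from the log:

	# schedule a stop five minutes out, then replace it with a 15 second schedule
	minikube stop -p scheduled-stop-934937 --schedule 5m
	minikube stop -p scheduled-stop-934937 --schedule 15s
	# a pending schedule can be cancelled before it fires
	minikube stop -p scheduled-stop-934937 --cancel-scheduled
	# once a schedule has fired, status reports Stopped and exits 7
	minikube status --format='{{.Host}}' -p scheduled-stop-934937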

                                                
                                    
TestRunningBinaryUpgrade (221.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2880548991 start -p running-upgrade-669690 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2880548991 start -p running-upgrade-669690 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m21.413047282s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-669690 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-669690 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.789633268s)
helpers_test.go:175: Cleaning up "running-upgrade-669690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-669690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-669690: (1.222837307s)
--- PASS: TestRunningBinaryUpgrade (221.17s)
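The upgrade path above is: create the cluster with an older release, then run `start` on the same profile with the newer binary while the cluster is still running. A minimal sketch, assuming the legacy binary has been saved to an illustrative path (the log uses a temp file name):

	# create the cluster with the legacy release (v1.26.0 in this run, which still uses --vm-driver)
	/tmp/minikube-v1.26.0 start -p running-upgrade-669690 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# upgrade in place: the current binary adopts and restarts the existing profile
	minikube start -p running-upgrade-669690 --memory=2200 --driver=kvm2 --container-runtime=crio
	minikube delete -p running-upgrade-669690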

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.522113ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-533322] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
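The usage error above is the expected outcome: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal sketch of the rejected invocation and the remedy the error message itself suggests:

	# rejected with MK_USAGE (exit 14): a version cannot be pinned while Kubernetes is disabled
	minikube start -p NoKubernetes-533322 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# if kubernetes-version is set in the global config, clear it first
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-533322 --no-kubernetes --driver=kvm2 --container-runtime=crio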

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-533322 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-533322 --driver=kvm2  --container-runtime=crio: (1m36.652719088s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-533322 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (141.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2091257693 start -p stopped-upgrade-867428 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2091257693 start -p stopped-upgrade-867428 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m26.419001398s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2091257693 -p stopped-upgrade-867428 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2091257693 -p stopped-upgrade-867428 stop: (1.477088809s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-867428 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-867428 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.022215376s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.92s)
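Same idea as the running-binary upgrade above, except the legacy cluster is stopped before the newer binary restarts it; a minimal sketch (the old-binary path is illustrative for the temp file in the log):

	/tmp/minikube-v1.26.0 start -p stopped-upgrade-867428 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0 -p stopped-upgrade-867428 stop
	# the current binary restarts and upgrades the stopped cluster in place
	minikube start -p stopped-upgrade-867428 --memory=2200 --driver=kvm2 --container-runtime=crio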

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (65.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.567147621s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-533322 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-533322 status -o json: exit status 2 (264.20449ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-533322","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-533322
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-533322: (1.157212248s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.99s)

                                                
                                    
TestNoKubernetes/serial/Start (29.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-533322 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.69365695s)
--- PASS: TestNoKubernetes/serial/Start (29.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-533322 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-533322 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.734795ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
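The verification above relies on exit codes rather than output: `systemctl is-active --quiet` exits 0 only when the unit is active, so a non-zero exit over `minikube ssh` is what confirms the kubelet is not running. A minimal sketch wrapping the same command from the log:

	if minikube ssh -p NoKubernetes-533322 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is active (unexpected in --no-kubernetes mode)"
	else
	  echo "kubelet is not running, as expected"
	fi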

                                                
                                    
TestNoKubernetes/serial/ProfileList (23.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.376212631s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (9.199072934s)
--- PASS: TestNoKubernetes/serial/ProfileList (23.58s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-533322
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-533322: (3.130591154s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-533322 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-533322 --driver=kvm2  --container-runtime=crio: (23.912333498s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.91s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-867428
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-533322 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-533322 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.239078ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/false (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-814705 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-814705 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.866064ms)

                                                
                                                
-- stdout --
	* [false-814705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0826 11:58:34.481637  146855 out.go:345] Setting OutFile to fd 1 ...
	I0826 11:58:34.481754  146855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:58:34.481764  146855 out.go:358] Setting ErrFile to fd 2...
	I0826 11:58:34.481768  146855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0826 11:58:34.481977  146855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19501-99403/.minikube/bin
	I0826 11:58:34.482564  146855 out.go:352] Setting JSON to false
	I0826 11:58:34.483668  146855 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6055,"bootTime":1724667459,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0826 11:58:34.483733  146855 start.go:139] virtualization: kvm guest
	I0826 11:58:34.486006  146855 out.go:177] * [false-814705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0826 11:58:34.487234  146855 out.go:177]   - MINIKUBE_LOCATION=19501
	I0826 11:58:34.487274  146855 notify.go:220] Checking for updates...
	I0826 11:58:34.489421  146855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0826 11:58:34.490670  146855 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19501-99403/kubeconfig
	I0826 11:58:34.491692  146855 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19501-99403/.minikube
	I0826 11:58:34.492640  146855 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0826 11:58:34.493677  146855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0826 11:58:34.495342  146855 config.go:182] Loaded profile config "cert-expiration-156240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:58:34.495449  146855 config.go:182] Loaded profile config "cert-options-373568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0826 11:58:34.495526  146855 config.go:182] Loaded profile config "kubernetes-upgrade-117510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0826 11:58:34.495618  146855 driver.go:392] Setting default libvirt URI to qemu:///system
	I0826 11:58:34.535735  146855 out.go:177] * Using the kvm2 driver based on user configuration
	I0826 11:58:34.536947  146855 start.go:297] selected driver: kvm2
	I0826 11:58:34.536964  146855 start.go:901] validating driver "kvm2" against <nil>
	I0826 11:58:34.536976  146855 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0826 11:58:34.538921  146855 out.go:201] 
	W0826 11:58:34.540251  146855 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0826 11:58:34.541767  146855 out.go:201] 

                                                
                                                
** /stderr **
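The failure above is the intended negative case: with the crio runtime minikube requires a CNI, so `--cni=false` is rejected before any VM is created (exit 14). A hedged sketch of the rejected flag and one start line that satisfies the constraint (bridge is one of the built-in --cni options):

	# rejected: the crio container runtime requires CNI
	minikube start -p false-814705 --cni=false --driver=kvm2 --container-runtime=crio
	# accepted: choose an explicit CNI, or omit --cni and let minikube pick one
	minikube start -p false-814705 --cni=bridge --driver=kvm2 --container-runtime=crio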
net_test.go:88: 
----------------------- debugLogs start: false-814705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 26 Aug 2024 11:58:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.32:8443
  name: cert-expiration-156240
contexts:
- context:
    cluster: cert-expiration-156240
    extensions:
    - extension:
        last-update: Mon, 26 Aug 2024 11:58:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-156240
  name: cert-expiration-156240
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-156240
  user:
    client-certificate: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.crt
    client-key: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-814705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-814705"

                                                
                                                
----------------------- debugLogs end: false-814705 [took: 3.114016934s] --------------------------------
helpers_test.go:175: Cleaning up "false-814705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-814705
--- PASS: TestNetworkPlugins/group/false (3.39s)
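
Every debug probe above reports the false-814705 profile as missing, so the only real work left is the cleanup the harness performs at the end. A manual equivalent, following the hint the log itself prints (sketch, not part of the test):

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 delete -p false-814705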

                                                
                                    
x
+
TestPause/serial/Start (56.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-585941 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-585941 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (56.668273878s)
--- PASS: TestPause/serial/Start (56.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (78.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m18.074252855s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.07s)
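
The --preload=false flag disables minikube's preloaded image tarball, so this profile pulls its Kubernetes images at start time, which helps explain why FirstStart runs longer here than the preload-backed starts in this group. A hedged way to inspect what ended up in the node's image store, using the image list subcommand that also appears later in this report:

    out/minikube-linux-amd64 -p no-preload-956479 image list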

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (71.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-923586 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-923586 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m11.785944646s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.79s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-956479 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [58a90708-2af8-4c1e-b40b-f4a0bf127889] Pending
helpers_test.go:344: "busybox" [58a90708-2af8-4c1e-b40b-f4a0bf127889] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [58a90708-2af8-4c1e-b40b-f4a0bf127889] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004539071s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-956479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)
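
The DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to leave Pending, and then reads the container's open-file limit via exec. A standalone sketch of the same check, assuming the no-preload-956479 context is still available:

    kubectl --context no-preload-956479 get pods -l integration-test=busybox
    kubectl --context no-preload-956479 exec busybox -- /bin/sh -c "ulimit -n"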

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-923586 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5d5916e-046a-4c91-9dfa-e052cba89f7f] Pending
helpers_test.go:344: "busybox" [e5d5916e-046a-4c91-9dfa-e052cba89f7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5d5916e-046a-4c91-9dfa-e052cba89f7f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004851059s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-923586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-956479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-956479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-923586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-923586 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-697869 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-697869 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (55.060169589s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f47809c2-1c34-483a-a1b9-e89d5e9295b5] Pending
helpers_test.go:344: "busybox" [f47809c2-1c34-483a-a1b9-e89d5e9295b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f47809c2-1c34-483a-a1b9-e89d5e9295b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004095206s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-697869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-697869 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (682.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-956479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-956479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m22.255555043s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-956479 -n no-preload-956479
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (682.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (613.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-923586 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-923586 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m13.55365704s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923586 -n embed-certs-923586
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (613.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-839656 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-839656 --alsologtostderr -v=3: (4.292831177s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-839656 -n old-k8s-version-839656: exit status 7 (65.485585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-839656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
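
Exit status 7 from minikube status is tolerated here for a stopped profile (hence the "may be ok" note); the dashboard addon is then enabled against the stopped cluster so it comes up on the next start. A small sketch of the same sequence with the exit code printed explicitly (assuming the profile is still stopped):

    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-839656; echo "status exit: $?"
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-839656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4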

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (509.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-697869 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0826 12:07:20.477888  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:09:34.327504  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:12:20.477353  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:13:43.551915  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:14:34.327359  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-697869 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (8m29.183576844s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697869 -n default-k8s-diff-port-697869
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (509.45s)
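
The E0826 cert_rotation lines interleaved above appear to come from the test binary's client-go certificate-reload watcher, which is still pointing at client certs of profiles (functional-497672, addons-530639) deleted earlier in the run; they are noise relative to this test's outcome. A hedged way to confirm the referenced file really is gone:

    test -f /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt && echo present || echo missing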

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-114926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0826 12:29:34.327368  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/addons-530639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-114926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (44.180794564s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-114926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-114926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117051063s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-114926 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-114926 --alsologtostderr -v=3: (10.352847044s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114926 -n newest-cni-114926
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114926 -n newest-cni-114926: exit status 7 (65.583699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-114926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-114926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0826 12:30:23.554134  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-114926 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.121112083s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114926 -n newest-cni-114926
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.46s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-114926 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-114926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-114926 --alsologtostderr -v=1: (1.922014903s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114926 -n newest-cni-114926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114926 -n newest-cni-114926: exit status 2 (341.748346ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114926 -n newest-cni-114926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114926 -n newest-cni-114926: exit status 2 (334.4332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-114926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-114926 --alsologtostderr -v=1: (1.0262912s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114926 -n newest-cni-114926
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114926 -n newest-cni-114926
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.49s)
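
The Pause step walks the cluster through pause -> status -> unpause -> status: while paused, minikube status exits 2 with APIServer reported as Paused and Kubelet as Stopped, and the post-unpause status calls succeed again. A compact sketch of that cycle, using the same commands as the test with the exit code made visible:

    out/minikube-linux-amd64 pause -p newest-cni-114926
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-114926; echo "exit: $?"
    out/minikube-linux-amd64 unpause -p newest-cni-114926
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-114926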

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (63.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m3.32701492s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m34.000905338s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (127.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m7.544070978s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-stmvd" [b952de15-46d8-43be-9380-cb6b69c843da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-stmvd" [b952de15-46d8-43be-9380-cb6b69c843da] Running
E0826 12:31:53.091902  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:31:53.098339  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:31:53.109840  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003966529s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-814705 exec deployment/netcat -- nslookup kubernetes.default
E0826 12:31:53.131588  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:31:53.173752  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:31:53.255222  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0826 12:31:53.417594  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
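
DNS, Localhost and HairPin all reuse the netcat deployment started in NetCatPod: DNS resolves kubernetes.default from inside the pod, Localhost checks that the pod can reach port 8080 on its own loopback, and HairPin checks that the pod can reach itself back through the netcat service name. The same three probes, runnable by hand against this profile:

    kubectl --context auto-814705 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"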

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (76.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m16.196038884s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8q7ph" [283262ae-11fc-4976-a5bf-d59cb33dde95] Running
E0826 12:32:13.588517  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/no-preload-956479/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0048462s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
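
ControllerPod gates the connectivity checks on the CNI's own daemonset pod being healthy; here it waits for a pod labelled app=kindnet in kube-system. An equivalent one-off wait (sketch; the timeout is an arbitrary choice, not the test's):

    kubectl --context kindnet-814705 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=120s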

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nxpx5" [78cd6ef9-662a-4c21-9fbe-f8c3dd2060fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0826 12:32:20.477194  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/functional-497672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nxpx5" [78cd6ef9-662a-4c21-9fbe-f8c3dd2060fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005343682s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (59.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.306221568s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4xgnq" [e77ff4d4-1fd3-4006-889c-c8ec21ad86e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006139316s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ckgcf" [6e7e0096-a83d-4231-8464-e614c0609e25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0826 12:33:02.372045  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.378483  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.390021  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.411609  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.454004  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.535606  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:02.697897  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:03.019811  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:03.661262  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
E0826 12:33:04.943345  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-ckgcf" [6e7e0096-a83d-4231-8464-e614c0609e25] Running
E0826 12:33:07.504942  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/old-k8s-version-839656/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005638909s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.481608671s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bxqk2" [7d2888d9-3dd5-45d4-a6bb-3a7afd64b797] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bxqk2" [7d2888d9-3dd5-45d4-a6bb-3a7afd64b797] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004245749s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (82.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-814705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m22.540761183s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cxbgw" [cfbd60d7-6f55-4778-957b-4b5e35b8c039] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cxbgw" [cfbd60d7-6f55-4778-957b-4b5e35b8c039] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004301101s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7qv7n" [6b420817-e4ec-41d1-895b-0ea719da1ca5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004912631s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
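
Note: ControllerPod only confirms that the flannel daemonset pod (label app=flannel, namespace kube-flannel) is Running before the connectivity subtests start. If the profile were still up, a comparable manual readiness check would look like:

  kubectl --context flannel-814705 -n kube-flannel get pods -l app=flannel
  kubectl --context flannel-814705 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s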

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kgn5q" [df726c12-e0b5-494f-bca8-3728f9e89101] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kgn5q" [df726c12-e0b5-494f-bca8-3728f9e89101] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004808626s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-814705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-814705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9jvxz" [af6d2ac7-3af6-44a8-a7b4-2da8ece3df7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9jvxz" [af6d2ac7-3af6-44a8-a7b4-2da8ece3df7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004872323s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0826 12:34:55.204542  106598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/default-k8s-diff-port-697869/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-814705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)
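
Note: the DNS subtest resolves kubernetes.default from inside the netcat pod, which implicitly requires the cluster DNS service to be reachable over the bridge CNI. Had the lookup failed, a reasonable first triage (assuming the standard kube-dns Service in kube-system that minikube normally deploys) would be to confirm the DNS Service and retry with the fully qualified name:

  kubectl --context bridge-814705 -n kube-system get svc kube-dns
  kubectl --context bridge-814705 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local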

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-814705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
266 TestStartStop/group/disable-driver-mounts 0.14
272 TestNetworkPlugins/group/kubenet 2.9
280 TestNetworkPlugins/group/cilium 5.39
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
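
Note: all eight TunnelCmd serial steps below are skipped for the same reason: manipulating host routes needs 'route' to run under sudo without a password prompt, which this agent does not allow. A quick check of whether an environment would clear that gate (an assumption about the underlying requirement, not the test's exact probe) is:

  sudo -n true && echo "passwordless sudo available" || echo "password required"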

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-148783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-148783
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-814705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 26 Aug 2024 11:58:03 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.61.32:8443
name: cert-expiration-156240
contexts:
- context:
cluster: cert-expiration-156240
extensions:
- extension:
last-update: Mon, 26 Aug 2024 11:58:03 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: cert-expiration-156240
name: cert-expiration-156240
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-156240
user:
client-certificate: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.crt
client-key: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.key
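
Note: the dumped kubeconfig has current-context set to "" and only contains the cert-expiration-156240 profile, which is why every kubectl probe against kubenet-814705 above fails with "context was not found". With a config in this state, any manual follow-up has to name a context explicitly (and assumes that profile's cluster is still running), for example:

  kubectl config get-contexts
  kubectl --context cert-expiration-156240 get nodes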

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-814705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-814705"

                                                
                                                
----------------------- debugLogs end: kubenet-814705 [took: 2.757048943s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-814705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-814705
--- SKIP: TestNetworkPlugins/group/kubenet (2.90s)
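
Note: the kubenet group is skipped because kubenet is not a CNI plugin and the crio runtime under test requires one (net_test.go:93). Purely as an illustration, and not something this suite actually runs, a comparable profile on the same KVM driver would instead be started with an explicit CNI:

  minikube start -p kubenet-814705 --driver=kvm2 --container-runtime=crio --cni=bridge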

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-814705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-814705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19501-99403/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 26 Aug 2024 11:58:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.32:8443
  name: cert-expiration-156240
contexts:
- context:
    cluster: cert-expiration-156240
    extensions:
    - extension:
        last-update: Mon, 26 Aug 2024 11:58:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-156240
  name: cert-expiration-156240
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-156240
  user:
    client-certificate: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.crt
    client-key: /home/jenkins/minikube-integration/19501-99403/.minikube/profiles/cert-expiration-156240/client.key
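Note: in the kubeconfig above, current-context is "" and there is no entry for cilium-814705, which is why every kubectl call in this dump reports that the context was not found. A minimal sketch of how the contexts could be inspected or selected against this kubeconfig (illustrative commands, not part of the captured output):

    kubectl config get-contexts                          # list defined contexts; cilium-814705 is absent
    kubectl config use-context cert-expiration-156240    # switch to the only context defined above
    minikube start -p cilium-814705                      # or recreate the deleted profile, which writes its context back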

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-814705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-814705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-814705"

                                                
                                                
----------------------- debugLogs end: cilium-814705 [took: 5.247754283s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-814705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-814705
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)